id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
214323140 | pes2o/s2orc | v3-fos-license | Clinical and functional outcomes after posterior decompression for prolapsed intervertebral disc at lumbosacral spine: A Prospective study
Introduction: Lumbar decompression surgery is one of the most commonly performed spine surgeries, with numerous variations in technique described to date. Aim: The aim of the present study was to assess the clinical and functional outcome in patients with lumbosacral disc prolapse after decompression surgery. Materials and Method: A prospective study was conducted on 25 patients with single level lumbar disc prolapse between June 2016 and June 2017. The inclusion criteria were intervertebral disc prolapse at the L3-L4, L4-L5 or L5-S1 level with symptoms of radiculopathy for more than 6 weeks in patients who had failed conservative management. Patients with marked instability, history of previous spine surgery, multiple level involvement and patients with infection were excluded from the study. Results: The mean age of the patients was 44.56+/-4.34 years. The mean duration of symptoms was 30.24+/-6.20 weeks. The average duration of surgery was 89.32+/-21.15 mins. 52% of the patients had L4-L5 level involvement. The average duration of stay in the hospital was 4.72+/-2.08 days. The preoperative Visual Analog Scale score was 6.24+/-2.34, which decreased to 1.72+/-0.65 at 24 months. The mean Oswestry Disability Index questionnaire score was 63.83 pre-operatively, which decreased significantly to 19.18 at the end of 24 months (P< 0.001). Conclusion: Lumbar decompression surgery by the standard posterior approach gives a good functional outcome in patients with lumbar disc prolapse and symptoms of radiculopathy without instability.
Introduction
Lumbar disc herniation is one of the leading causes for consultation in a spine clinic. The lifetime incidence of sciatica has been reported to be between 13 and 40% [1]. The majority of patients with disc prolapse presenting with radiculopathy respond well to conservative management, while surgery is required in only 2-10% of cases [2,3]. The conservative methods include rest, non-steroidal anti-inflammatory drugs, physical therapy, ozone therapy and transforaminal or epidural steroid injection. There has been an evolution in the operative techniques for disc prolapse, from standard discectomy to microdiscectomy and microendoscopic discectomy, with varying rates of success. The aim of the present study was to assess the functional and clinical outcomes in patients with prolapsed intervertebral disc at the lumbosacral spine treated by standard posterior decompression surgery.
Material and Methods
A prospective study was conducted at a tertiary care institute in Navi Mumbai on 25 patients with single level lumbar disc prolapse between June 2016 and June 2017. The inclusion criteria were intervertebral disc prolapse at the L3-L4, L4-L5 or L5-S1 level with symptoms of radiculopathy for more than 6 weeks in patients who had failed conservative management. Patients with marked instability, a history of previous spine surgery, multiple level involvement and patients with infection were excluded from the study. All the patients were examined clinically (positive root irritation signs) and the diagnosis was confirmed on magnetic resonance imaging before enrolment in the study. The procedure was explained in detail to all patients. Ethical committee approval was obtained prior to the commencement of the study.
Procedure
All the patients were operated on under general anaesthesia. The patient was positioned prone on a Jackson table with the head on a gelatine headrest and both shoulders and elbows in 90 degrees of flexion. Proper padding was provided below the elbows and knees. A total of three doses of a second generation cephalosporin were administered intravenously (the first dose 30 mins before the incision and two doses 12 hours apart postoperatively). The affected level was marked with a sterile 18G needle using fluoroscopy in both orthogonal views. Local infiltration of adrenaline with normal saline (1:300) was done at the incision site. A linear incision of about 3-5 cm was made 0.5 cm off the midline. The dorsolumbar fascia was separated, followed by insertion of a self-retaining retractor at the pathological level. The ligamentum flavum was sharply incised and removed. Laminotomy was performed on the pathological side, followed by separation of the nerve root. The extruded disc was then removed with a William pituitary rongeur and the decompression was completed. The foramen was addressed using up- and down-going rongeurs. Bleeders were identified and cauterized. After thorough inspection for disc remnants, decompression of the nerve root was checked, followed by a wound wash and meticulous closure. No drain was used in any of the cases in the present study due to the limited exposure. The final outcome was measured using the Visual Analog Scale and the mean Oswestry Disability Index questionnaire pre-operatively and then at 6, 12 and 24 months.
Statistical analysis
A two-sample independent t-test was used to assess the mean Oswestry Disability Index questionnaire scores. The results were expressed as mean with standard deviation, and p < 0.05 was considered statistically significant. Analysis was done using Epi-Info software (Version 3.4.3) and Microsoft Excel 2013 (Microsoft Office v15.0).
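As an illustration of the analysis described above, the following sketch shows how a two-sample independent t-test on pre-operative versus follow-up ODI scores could be run in Python. The score arrays are hypothetical placeholders, not the study data, and the paper itself used Epi-Info and Excel rather than this code.

```python
# Illustrative sketch only: hypothetical ODI scores for 25 patients, not the study data.
from scipy import stats

odi_pre = [62, 70, 58, 65, 61, 67, 72, 60, 66, 63, 59, 68, 64,
           62, 65, 71, 57, 69, 66, 60, 63, 67, 61, 64, 68]
odi_24m = [18, 22, 15, 20, 17, 21, 25, 16, 19, 18, 14, 23, 20,
           17, 19, 24, 13, 22, 20, 16, 18, 21, 17, 19, 23]

# Two-sample independent t-test, as stated in the Statistical analysis section
t_stat, p_value = stats.ttest_ind(odi_pre, odi_24m)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")  # p < 0.05 taken as statistically significant
```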
Results
There were 16 (64%) males and 9 (36%) females in the present study. The mean age of the patients was 44.56+/-4.34 years. The mean duration of symptoms was 30.24+/-6.20 weeks. The average duration of surgery was 89.32+/-21.15 mins. Thirteen (52%) patients had L4-L5 level involvement, 8 (32%) patients had L5-S1 involvement, and 4 (16%) patients had L3-L4 level involvement. The average duration of stay in the hospital was 4.72+/-2.08 days. One patient (4%) had a superficial dural tear, which was repaired intra-operatively; the patient did not have complaints of post-operative hypotension or any further complications. There were no cases of superficial or deep infection and no case of root injury in the present study. The pre-operative Visual Analog Scale score was 6.24+/-2.34, which decreased to 4.18+/-3.42 at 6 months, 2.82+/-1.89 at 12 months and 1.72+/-0.65 at 24 months. The mean Oswestry Disability Index (ODI) questionnaire score was 63.83 pre-operatively, which decreased significantly to 41.24 at 6 months, and to 31.12 and 19.18 at the end of 12 and 24 months respectively (P< 0.001). The ODI score is used to assess the effect of low back pain on activities of daily living. It includes sections on the ability to walk, sit, sleep and stand, pain intensity, employment/homemaking, travelling, social life, lifting, and personal care (e.g. washing, dressing). Each of the 10 sections is scored from 0 to 5: if the first statement is marked the score is 0, and if the last statement is marked the score is 5. The final score is calculated as a percentage, for example: 16 (total score) / 50 (total possible score) x 100 = 32%.
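The ODI percentage calculation described above can be expressed as a short function. This is a hedged sketch of the standard scoring convention, and the example section scores are hypothetical values chosen only to reproduce the 16/50 = 32% worked example from the text.

```python
def odi_percent(section_scores, max_per_section=5):
    """Oswestry Disability Index expressed as a percentage.

    section_scores: the 0-5 scores of the answered sections (up to 10).
    The denominator is 5 points per answered section, so 10 sections give
    a maximum possible score of 50.
    """
    total = sum(section_scores)
    possible = max_per_section * len(section_scores)
    return 100.0 * total / possible

# Hypothetical section scores summing to 16 over 10 sections: 16/50 x 100 = 32%
print(odi_percent([5, 3, 2, 1, 0, 2, 1, 0, 1, 1]))  # 32.0
```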
Discussion
The operative treatment for lumbar disc prolapse has evolved since Mixter and Barr first described the wide open posterior transdural approach for lumbar disc herniation in 1934 [4]. Later, in 1939, Love described the standard discectomy procedure involving release of the nerve root, which forms the basis of all decompressions to date [5]. Various techniques have since been described in the literature to reduce soft tissue trauma, namely microdiscectomy and microendoscopic discectomy using extraforaminal or transforaminal approaches [6]. With these techniques, the basic principles remain the same: the goal of surgery is to achieve sufficient decompression with minimal soft tissue injury. The long-term functional outcome is what establishes the success of a particular procedure. More extensive surgeries are associated with complications such as epidural scarring and damage to the soft tissues with some degree of instability. In a large multicentre series from the USA, the SPORT trial, non-operative treatment was compared with surgery, and the conclusion was in favour of surgery at 2 years' follow-up. Although both groups had substantial improvement, the patients who underwent surgery had greater overall improvement in terms of back pain and sciatica [7]. All patients operated on in the present study were given a 6-week trial of conservative treatment before surgery. In a study by Katayama et al. [8], blood loss during surgery was found to be greater after standard decompression than after microsurgical techniques; however, the blood loss did not affect the final outcome in either group. Similar observations were reported by Huang et al. [9] in their study. There was little intra-operative blood loss in any of the patients in the present study. A systematic review by Gotfryd and Avanzi [10] compared standard discectomy with microdiscectomy and microendoscopic discectomy and found that microdiscectomy and microendoscopic discectomy are superior to standard discectomy with respect to the volume of blood loss, systemic repercussions and duration of hospital stay; however, there was no clinically significant difference in final outcome amongst the three techniques. A recent meta-analysis [11] compared microendoscopic discectomy with standard discectomy and found more studies reporting higher rates of dural tear, nerve root injury and recurrence, along with a limited field of vision, where microendoscopic discectomy was performed as compared to conventional surgery. However, there was no major statistically significant difference in the long-term follow-up of patients in the two groups. Thus, microdiscectomy and microendoscopic discectomy have a longer initial learning curve compared to standard discectomy. In the present study, there was one patient with a dural tear; this can be attributed to adhered tissue and the less invasive nature of the surgery. In the present study there was a male preponderance (64%). Similar findings were reported by Pappas et al. [12] and Davis et al. [13]. Amongst the levels involved, L4-L5 was the commonest in the present study (52%), which was similar to the study by Pappas et al. [12]. The Oswestry Disability Index questionnaire is a simple and one of the most commonly used scoring systems to assess the effect of low back pain on activities of daily living [14].
It is one of the most commonly used questionnaire systems in spine surgery. There was a consistent decrease in the score postoperatively, which was statistically significant.
Limitations
The small sample size and short duration of follow-up remain the limitations of the present study.
Conclusion
Lumbar decompression surgery by standard posterior approach gives a good functional outcome in patients with lumbar disc prolapse and symptoms of radiculopathy without instability. | 2020-02-20T09:17:41.299Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "26e5db94ddf5a0d54fbdf219099bd8fc4679999e",
"oa_license": null,
"oa_url": "https://www.orthopaper.com/archives/2020/vol6issue1/PartH/6-1-34-474.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "35c46ce72a55a67eafdc2021005aca1a7cd8f0a6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4522494 | pes2o/s2orc | v3-fos-license | Helical tomotherapy to LINAC plan conversion utilizing RayStation Fallback planning
Abstract The RaySearch RayStation Fallback (FB) planning module can generate an equivalent backup radiotherapy treatment plan, facilitating treatment on other linear accelerators. FB plans are generated by the RayStation FB module by simulating the target and organ-at-risk (OAR) dose distribution of the original plan and can be delivered on various backup linear accelerators. In this study, backup plans for a Varian TrueBeam linear accelerator were generated from helical tomotherapy (HT) plans with the RayStation FB module. Thirty patients, 10 with lung cancer, 10 with head and neck (HN) cancer, and 10 with prostate cancer, who were treated with HT, were included in this study. Intensity-modulated radiotherapy Fallback plans (FB-IMRT) were generated for all patients, and three-dimensional conformal radiotherapy Fallback plans (FB-3D) were generated only for lung cancer patients. A dosimetric comparison study evaluated FB plans based on dose coverage to 95% of the PTV volume (R95), PTV mean dose (Dmean), Paddick's conformity index (CI), and dose homogeneity index (HI). The evaluation showed that all IMRT plans were statistically comparable between HT and FB-IMRT plans, except that PTV HI was worse in prostate plans, and PTV R95 and HI were worse in HN multitarget plans, for FB-IMRT. For 3D lung cancer plans, only the PTV R95 was statistically comparable between HT and FB-3D plans; PTV Dmean was higher, and CI and HI were worse, compared to HT plans. The FB plans using a TrueBeam linear accelerator generally offered better OAR sparing than HT plans for all patients. In this study, all FB-IMRT plans and 9/10 FB-3D plans were clinically acceptable without further modification and optimization once the FB plans were generated. However, the statistical differences between HT and FB-IMRT/3D plans may not be clinically significant. One FB-3D plan failed to simulate the original plan without further optimization.
Generally, a treatment planning system (TPS) is an integrated software package that allows definition of the target and organs at risk (OAR), management of treatment plans, plan optimization, and delivery quality assurance (DQA). It also includes DICOM import and export and data management software for archiving and managing patient data. TPSs such as Eclipse (Varian Medical Systems, Palo Alto, CA, USA), Tomotherapy (Accuray Inc, Sunnyvale, CA, USA), Pinnacle (Philips Healthcare, Andover, MA, USA), and RayStation (RaySearch Laboratories, Stockholm, Sweden) have different dose calculation engines as well as other characteristics that are unique to each system. Furthermore, each TPS needs to be commissioned using beam data from the linear accelerator to be used for patient treatment delivery. For example, a treatment plan generated from a TPS commissioned for a Varian Clinac iX linear accelerator cannot be directly used for treatment on a Varian TrueBeam linear accelerator. In summary, there is no easy way to transfer patient treatment plans between different TPSs without repeating a significant amount of work.
Due to the lack of interchangeability among TPSs, there is a need to develop a method that can automatically transfer patient plans from one treatment unit/TPS to another treatment unit/TPS. This is especially useful for treatment centers that have multiple treatment units and TPSs that want to switch patients due to, for example, scheduling conflicts and machine down time.
Recently, RayStation TPS developed several advanced features to generate backup treatment plans [1].
An FB plan was created by extracting information from a protocol plan generated using the FB module in the RayStation TPS. The extracted information includes treatment planning parameters such as the treatment technique (3D, IMRT, or VMAT), beam geometry (gantry, collimator, couch, and other accessory settings), and optimization parameters such as the dose-mimicking weighting factor between the target and the organs at risk (OARs). These parameters can be edited by the user, and it is possible to test the FB protocol plan by using the dose-mimicking technique to compare the FB plan and the original HT plan with a number of visual tools (e.g., dose-volume histogram (DVH) curves and dose differences).
The precision of the FB plan dose simulation is strongly related to the pregenerated protocol plan. Protocol plans can be used as shared protocol plans, such as tumor-specific protocol plans (lung, HN, prostate), treatment technique-specific protocol plans (IMRT, 3D, VMAT), energy-specific protocol plans (6 MV, 10 MV), beam angle-specific protocol plans (i.e., six-field, seven-field, or nine-field), and target position-specific protocol plans (i.e., head first or feet first).
The protocol plans can also be made very specific and used as patient-specific protocol plans. A more specific protocol plan will result in a much higher degree of correspondence between the original HT plan and the resultant FB plan; however, a great deal of time and effort is needed to generate these protocol plans.
2.C | Fallback plan creation
In this study, lung and HN IMRT FB plans shared the same single protocol plan for each patient in the head-first supine position, and prostate IMRT FB plans shared another single protocol plan for each patient in the feet-first supine position. The protocol plan parameters used for all FB-IMRT plans included: nine fields with fixed gantry angles of 40, 80, 120, 160, 200, 240, 280, 320, and 360 degrees; a collimator angle of 0 degrees; a couch angle of 0 degrees; and a static multileaf collimator (sMLC). The FB plans use a dose-mimicking optimization algorithm to optimize the Fallback plan. The goal of the dose-mimicking optimization is to minimize the error in DVH between the reference plan (original HT plan) and the deliverable plan (Linac Fallback plan). Functions associated with OARs and targets are given a weighting factor equal to a user-defined target priority (target/OAR ratio). In this study, the dose-mimicking target/OAR optimization weighting factor was set to 100.00, which means the optimization goal for the target is weighted 100 times more than that for the OARs.
Usually, the higher the ratio, the more importance is given to the target dose simulation, and the lower the ratio, the more importance is given to the OAR dose simulation.
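The weighting idea described above can be sketched as a toy objective function. This is not RayStation's actual dose-mimicking implementation (which operates on DVH-based functions and deliverability constraints); it is only an illustrative stand-in showing how a target/OAR ratio of 100 would weight target deviations far more heavily than OAR deviations. The function name, penalty form, and example values are assumptions.

```python
import numpy as np

def dose_mimic_loss(dose_fb, dose_ref, target_mask, oar_mask, target_oar_ratio=100.0):
    """Toy mimicking objective: penalise deviation of the Fallback dose from the
    reference (HT) dose, weighting target voxels target_oar_ratio times more
    heavily than OAR voxels. OARs are only penalised where the Fallback dose
    exceeds the reference dose."""
    target_err = np.mean((dose_fb[target_mask] - dose_ref[target_mask]) ** 2)
    oar_err = np.mean(np.maximum(dose_fb[oar_mask] - dose_ref[oar_mask], 0.0) ** 2)
    return target_oar_ratio * target_err + oar_err

# Synthetic 1D example: two "target" voxels prescribed 60 Gy and two "OAR" voxels
dose_ref = np.array([60.0, 60.0, 20.0, 10.0])
dose_fb = np.array([59.0, 61.0, 25.0, 8.0])
target_mask = np.array([True, True, False, False])
oar_mask = np.array([False, False, True, True])
print(dose_mimic_loss(dose_fb, dose_ref, target_mask, oar_mask))  # 112.5; the target term dominates
```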
An energy of 6 MV was selected for lung and HN patients, and 10 MV was selected for prostate patients. For FB-3D plans, patient-specific individual protocol plans were used. Plan parameters such as the gantry, collimator, couch, and wedge angles for the FB-3D plans were determined individually, and the final protocols selected were the ones that could best mimic the original HT plans.
For lung cancer patients, both FB-3D and FB-IMRT plans were evaluated. For HN and prostate cancer patients, only FB-IMRT plans were evaluated because IMRT treatment technique is the most commonly used treatment technique for HN and prostate cancer patients.
The quantitative evaluation of the PTV dose distribution included: the mean dose of the PTV (Dmean), the PTV dose coverage R95 (defined in terms of Dx%, the dose to x% of the target volume), Paddick's conformity index (CI) [2], and the homogeneity index (HI). CI was defined by the following equation:

CI = TV_PIV^2 / (TV x V_PIV)

where TV is the target volume, TV_PIV is the target volume covered by the prescription isodose volume (PIV), and V_PIV is the total prescription isodose volume. HI was defined by a corresponding equation.
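A minimal worked example of the Paddick CI defined above is shown below; the volumes are hypothetical, and the HI equation is not reproduced here because its exact form is not given in the text above.

```python
def paddick_ci(tv, tv_piv, v_piv):
    """Paddick conformity index: CI = TV_PIV^2 / (TV * V_PIV).

    tv     : target volume
    tv_piv : target volume covered by the prescription isodose volume (PIV)
    v_piv  : total prescription isodose volume
    All three must share the same units (e.g. cm^3); a perfectly conformal plan gives 1.0.
    """
    return (tv_piv ** 2) / (tv * v_piv)

# Hypothetical plan: 100 cm^3 target, 95 cm^3 of it covered, 110 cm^3 prescription isodose volume
print(round(paddick_ci(100.0, 95.0, 110.0), 3))  # 0.82
```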
| RESULTS
The PTV dose coverage R95, Dmean, CI, and HI from FB-IMRT and HT plans were evaluated. We noticed that only the DVH of PTV-66 had an acceptable agreement between FB-IMRT and original HT plans when comparing single-target dose simulation (Fig. 6) and multitarget dose simulation.
ACKNOWLEDGMENTS
The authors acknowledge the assistance of Carolyn Hancock, CMD, in proof-reading the manuscript and giving valuable feedback.
FIG. 6. Example of DVH calculated from FB-IMRT and HT plans with one PTV target (PTV66 and Target/OARs optimization weighting factor = 100) for one HN patient. | 2018-04-03T00:31:13.465Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "525a91c3c86b767d4ab3bb4b2b841280155fa431",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "525a91c3c86b767d4ab3bb4b2b841280155fa431",
"s2fieldsofstudy": [
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6024225 | pes2o/s2orc | v3-fos-license | The calcification of cartilage matrix in chondrocyte culture: studies of the C-propeptide of type II collagen (chondrocalcin).
We have shown that when chondrocytes are isolated by collagenase digestion of hyaline cartilage from growth plate, nasal, and epiphyseal cartilages of bovine fetuses they rapidly elaborate an extracellular matrix in culture. Only growth plate chondrocytes can calcify this matrix as ascertained by incorporation of 45Ca2+, detection of mineral with von Kossa's stain and electron microscopy. There is an extremely close direct correlation between 45Ca2+ incorporation in the first 24 h of culture and the content of the C-propeptide of type II collagen, measured by radioimmunoassay, at the time of isolation and during culture. Moreover, growth plate cells have an increased intracellular content of the C-propeptide per deoxyribonucleic acid and, during culture, per hydroxyproline (as a measure of helical collagen) compared with nasal and epiphyseal chondrocytes. In growth plate chondrocytes 24,25-dihydroxycholecalciferol (24,25-[OH]2D3), but not 1,25-dihydroxycholecalciferol alone, stimulates the net synthesis of the C-propeptide and calcification; proteoglycan net synthesis is unaffected. Together, these metabolites of vitamin D further stimulate C-propeptide net synthesis but do not further increase calcification stimulated by 24,25-(OH)2D3. These observations further demonstrate the close correlation between the C-propeptide of type II collagen and the calcification of cartilage matrix.
The calcification of cartilage matrix in the growth plates of developing bones represents a fundamental requirement for bone growth and development. Recently, we identified a molecule with a 35,000 mol wt subunit in the fetal bovine epiphysis (4) that is present in the extracellular matrix in developing cartilages (19). When cartilage calcifies this molecule is focally deposited exactly where and when calcification occurs (19,20); we called this molecule chondrocalcin (19). Subsequent sequencing analysis of chondrocalcin has revealed that it is the C-propeptide of type II collagen (26). Whether or not the C-propeptide is an active participant in the calcification process remains to be seen.
We now describe comparative studies of this C-propeptide in cultures of chondrocytes isolated from calcifying fetal growth plate cartilage, and noncalcifying nasal and epiphyseal cartilages of the same fetus. We describe a culture system for the study of calcification and demonstrate that only growth plate cartilage matrix calcifies in culture, that calcification is closely correlated with the content of the C-propeptide of type II collagen, and that calcification and net synthesis of the C-propeptide are both stimulated by vitamin D.
Dr. Hinek is a visiting research scientist from the Pomeranian Academy of Medicine, Szczecin, Poland.
Isolation of Tissues
Bovine fetuses were obtained within 30 min of slaughter of pregnant cows from a local abattoir (Abbateir Soulanges, Les Cedres, Quebec). They were immediately transported to the laboratory. Fetal age was determined by measurement of tibial or femoral length (17); it ranged from 101 to 218 d. Femora, tibiae, and humeri were aseptically removed. Longitudinal incisions were made through each epiphysis from the articular surface to the bony metaphysis with a fine-toothed metal saw. The metaphysis was separated from the lower hypertrophic zone of the growth plate at its natural fracture face just below the last transverse septum of the lower hypertrophic zone. The primary growth plate was carefully dissected away from the noncalcifying cartilaginous epiphysis; since there is no clearly defined boundary in younger fetuses, growth plate slices up to ~2 mm distant from the metaphyseal junction were isolated. In older animals, isolation of growth plate was more precise since the secondary center of ossification is well developed and enlarges with increasing age, with the primary and secondary growth plates sandwiched between it and the junction with the metaphysis. Perichondrium and periosteum were removed circumferentially. Nasal septal cartilage and epiphyseal cartilage, remote from the primary and secondary centers of ossification, were removed for study at the same time. All tissues were placed in DME at room temperature and cut into small fragments ~1-3 mm³.
with constant gentle stirring on a styrofoam pad (to prevent overheating) with a teflon-coated bar until extracellular matrix had been completely removed. This took from 4-6 h. Undigested cartilage was removed by passing the digestion mixture through three layers of nylon mesh (size 35 x 35 µm; Nitex, Swiss Weaving Mills, Zurich). Cells were separated by centrifugation at 100 g for 5 min at room temperature. They were washed twice by centrifugation in the culture medium. Cells were counted with a hemocytometer chamber in the presence of 0.5% trypan blue in 0.85% sodium chloride (Flow Laboratories, Inc., McLean, VA). The viability was between 93-97%. The inclusion of serum in the digestion medium was found to be essential for the retention of good viability.
Cell Culture. Cells were cultured in the complete enzyme-free tissue culture medium at a density of 2 x 10^6 cells in 1 ml per well of multiwell tissue culture plates (24 well, 1.5 cm diameter; Costar, Data Packaging Corp., Cambridge, MA) and maintained in a humidified incubator at 37°C in 5% CO2 in air. Media were changed every third day. For each experimental determination triplicate cultures were established and means ± SD were determined.
Detection of Calcification. Incorporation of 45CaCl2 (New England Nuclear, Lachine, Quebec, Canada) into cells and cartilage matrix was measured in cultures by the addition of 0.25 µCi/ml at time zero and whenever the culture medium was changed. At the end of the incubation, culture medium was discarded and the intact cell layers were rinsed twice for 1 h at room temperature with fresh culture medium without serum. This medium was discarded. Cultures were air dried under vacuum (~1 h) at room temperature. They were solubilized by the addition of 0.5 ml of 90% formic acid for 30 min at 70°C before being mixed with 5 ml of scintillant (Ready Solvent; Beckman Instruments, Inc., Palo Alto, CA). Counting efficiency was found to be unaffected by the presence of this amount of formic acid.
DNA and Uronic Acid Assays. Cell layers were first digested at 37°C for 18 h with the bacterial collagenase (as used for cell isolation) at 0.5 mg/ml in 0.2 M Tris-HCl, pH 7.0 containing 1 mM CaCl2. Digestion then continued for a further 12 h at 37°C by the addition of 5 mM EDTA (to inactivate collagenase) and 1 mM dithiothreitol with 1 mg/ml papain. Each culture digest was then divided into two equal parts: one was centrifuged at 100 g for 5 min and the supernatant was retained for uronic acid assay (2); the other half was used for the fluorimetric assay of DNA (25).
Hydroxyproline assay. This was as described (3). Only cell layers were analyzed.
Radioimmunoassay for the C-Propeptide of Type II Collagen
Tissue Extraction. Culture medium and cell layers were assayed separately.
Cell layers were first dried under vacuum overnight at room temperature then extracted with 100 µl 4 M guanidine hydrochloride, containing 0.1 M potassium acetate, pH 5.8 and the proteinase inhibitors phenylmethylsulfonyl fluoride, EDTA, pepstatin, and iodoacetamide (22) at 4°C for 24 h.
Radiolabeling of C-Propeptide. 0.1 mg of C-propeptide isolated as described (4), in 100 µl of 50 mM Tris-HCl, pH 7.6 containing 150 mM sodium chloride, was added to 0.5 mCi Na125I (New England Nuclear) in 10 µl 0.4 M phosphate buffer, pH 7.4 and 10 µl chloramine T at 600 µg/ml in 50 mM Tris-HCl buffer, pH 7.6. The mixture was vortexed gently for 2 min at room temperature. Then 100 µl sodium metabisulfite was added at 1.2 mg/ml in the Tris-HCl buffer. Free iodine was removed by chromatography in the presence of 200 µl sodium iodide (10 mg/ml) in the Tris-HCl buffer containing 2 mg/ml of BSA. The column used was a siliconized 10-ml pipette containing a 10-ml bed volume of Sephadex G-25 (Pharmacia, Montreal, Quebec). Siliconization was necessary since the C-propeptide binds to untreated glass and polystyrene surfaces. Hence polypropylene pipettes and tubes were used for sample storage.
Radioimmunoassay. A solution phase inhibition assay was used in which
the binding of rabbit antibody to radiolabeled C-propeptide is competed for by known amounts of unlabeled C-propeptide or an unknown amount of C-propeptide (to be assayed). (Footnote 1: Abbreviations used in this paper: 1,25-(OH)2D3 and 24,25-(OH)2D3, 1,25-dihydroxycholecalciferol and 24,25-dihydroxycholecalciferol, respectively.) The immune complex that is formed is then bound to protein A-bearing Staphylococcus aureus, removed by centrifugation, and counted. The buffer was composed of 7.5 mM potassium dihydrogen phosphate, 143.4 mM disodium hydrogen phosphate at pH 8.0, containing 2.5% BSA, 5% sodium deoxycholate, 2.5% NP-40, and 0.05% sodium azide. 10 µl of tissue extract in 4 M guanidine hydrochloride diluted 10-fold with buffer, or 10 µl undiluted culture medium, or 10 µl of purified C-propeptide in buffer were added to 50 µl of rabbit antiserum to the C-propeptide (RI17; references 4 and 19) previously diluted with buffer so that it binds 40-50% of the total (10,000 cpm) radiolabeled C-propeptide added in 50 µl of buffer. After mixing, the solution was left overnight at 37°C. 50 µl of protein A as a formalin-killed preparation of Staphylococcus aureus, Cowan strain I, was added (10% suspension in water diluted 2.5-fold with buffer; Zymed, Cedarlane Laboratories, Hornby, Ontario) and mixed. Total counts were determined. After 20 min at room temperature, 2 ml of buffer were added. After centrifugation at 5,000 g for 10 min, supernatants were aspirated, and pellets were counted. A standard curve was constructed for all assays. Polypropylene tubes (12 x 75 mm; Fisher Scientific, Montreal) were used throughout together with polypropylene pipettes. All assays were performed in triplicate. Nonimmune binding of C-propeptide was determined by substituting similarly diluted nonimmune rabbit serum for immune serum. Counts of nonimmune binding were deducted from those for immune binding and the percentage inhibition of binding was determined. The presence of dilute guanidine hydrochloride and proteinase inhibitors was shown to have no effect on antibody binding in this assay. A typical inhibition curve is shown in Fig. 1. Unless otherwise stated, results of all biochemical and radioimmunoassays represent the means ± SD of triplicate determinations on each of triplicate cultures.
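The percentage inhibition of binding mentioned above can be computed as follows. This is a hedged sketch with hypothetical counts, assuming the usual RIA convention of normalising against the counts bound in the absence of competitor after subtracting the nonimmune background; the exact normalisation used in the paper is not stated.

```python
def percent_inhibition(sample_cpm, max_binding_cpm, nonimmune_cpm):
    """Percentage inhibition of binding in the solution-phase RIA.

    sample_cpm      : counts bound in the presence of the test sample (competitor)
    max_binding_cpm : counts bound with no competing unlabeled C-propeptide
    nonimmune_cpm   : counts bound with nonimmune serum (background, subtracted from both)
    """
    specific_sample = sample_cpm - nonimmune_cpm
    specific_max = max_binding_cpm - nonimmune_cpm
    return 100.0 * (1.0 - specific_sample / specific_max)

# Hypothetical counts: 4,500 cpm maximal binding, 400 cpm nonimmune background,
# 2,450 cpm bound in the presence of the sample
print(round(percent_inhibition(2450, 4500, 400), 1))  # 50.0 (% inhibition)
```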
Electron Microscopy
For morphological examination, cultures were fixed as described above for histology and then postfixed in 1% osmium tetroxide in 0.1 M sodium cacodylate, pH 7.4. Tissue was dehydrated in graded ethanols and embedded in Spurr resin (Polysciences, Inc., Warrington, PA). Ultrathin sections were stained with uranyl acetate and lead citrate and examined in a Philips 400 electron microscope.
Histology, Immunohistochemistry, and Electron Microscopy of Cells and Cell Cultures
Examination of electron micrographs of freshly isolated cells from growth plate, nasal, and epiphyseal cartilages revealed the presence of isolated healthy-looking chondrocytes that were free of morphologically recognizable cartilage matrix (data not shown). In culture, all chondrocytes rapidly produced an extracellular matrix in which the cells were embedded. The appearance of a growth plate chondrocyte culture is shown in Fig. 2. A "mat" of cartilage was formed that was firm enough for it to be removed intact with forceps from the culture vessel after 3 d. Hypertrophic cells were generally larger. Staining with von Kossa's reagent revealed the presence of mineral deposits (Fig. 2); these were only observed in growth plate cultures. They were of irregular shape and scattered throughout the extracellular matrix. Sites of calcification were clearly recognizable by electron microscopy in cultures of growth plate chondrocytes by their crystalline appearance and considerable electron opacity (Fig. 3). All these experiments were repeated several times with similar results. Several comparative studies were made of the net synthesis of the C-propeptide in these cultures, all of which produced similar results. In Fig. 5, a typical set of data for a 134-d-old fetus is shown. The FCS used in this study already contains significant amounts of the C-propeptide (time 0, Fig. 5) (0.82 µg/ml serum) as suggested by an earlier study (4). During the culture of growth plate cells, the total C-propeptide content in the cell layer progressively increased while the total content in culture medium remained essentially unchanged (Fig. 5). A smaller increase in the total C-propeptide content of the cell layer was also observed in cultures of nasal and epiphyseal chondrocytes (Fig. 5). The content of C-propeptide in culture media of nasal and epiphyseal cells exhibited a small decrease after several days in culture. The C-propeptide content of freshly isolated growth plate chondrocytes was always greater per cell than that of nasal and epiphyseal chondrocytes (Fig. 5); this was also the case when expressed per DNA (Table I). The total C-propeptide contents of cultures (cell layer plus medium) per DNA are shown for all three cultures in Fig. 6. Growth plate chondrocytes accumulated more C-propeptide than other cultures and this was retained primarily in the cell layer (Fig. 5). The amount of immunoreactive C-propeptide rapidly increased in the first 4 d in growth plate cultures whereas a small overall decrease per DNA was observed in nasal and epiphyseal cultures (Fig. 6). The relationship between C-propeptide content and calcification was examined by isolating growth plate chondrocytes from fetuses of various ages and recording the 45Ca2+ incorporation during the first 24 h of culture (Table II). Table II shows that with increasing fetal age, the C-propeptide content of growth plate chondrocytes increased in parallel with 45Ca2+ incorporation. The correlation coefficient of r = 0.95 for paired samples demonstrates the extremely close correlation between C-propeptide content and calcification (Fig. 7 bottom).
Synthesis and Accumulation of Uronic Acid and Hydroxyproline
Uronic acid as a measure of chondroitin sulfate and hence proteoglycan content was also determined in the cell layers. In all cultures of the same 134-d-old fetus, the uronic acid The results represent the means + standard deviations of C-propeptide and of 45Ca2+ incorporation in triplicate cultures. There are two fetuses each of 210 d of age. Regression analysis of paired samples revealed that the correlation coefficient (r = 0.95; y = 3.765 + 0.028x) is significant by Students' t test (P < 0.005). This plot is shown in Fig. 7 The uronic acid contents of these cultures per DNA content.
The uronic acid content was increased in all cultures and was greatest (as was growth rate) in epiphyseal cultures (Fig. 8a). On a DNA basis some decline per cell content was observed in epiphyseal cultures with time but little change was seen in other cultures (Fig. 8b). Similar amounts of hydroxyproline (as a measure of helical collagen) in the cell layer per unit DNA were found in cultures of nasal and growth plate chondrocytes, but more was present in epiphyseal cultures. With time, the hydroxyproline content exhibited a reduction in all cultures (Fig. 9a). The content of C-propeptide in the cell layer in relation to hydroxyproline content in a 160-d-old fetus revealed that the C-propeptide content was always greater in growth plate cultures at all times during the experiment (Fig. 9b). To determine the molar ratios of C-propeptide to helical collagen, hydroxyproline was determined as representing 11.2% of the total weight of helical collagen, which has a molecular weight of ~300,000 (14). The molecular weight of the C-propeptide was taken as 105,000 (6). With this information, the molar ratios of helical collagen to C-propeptide at 12 h were calculated, from the data shown in Fig. 9b, as follows: 883:1 (growth plate), 2,301:1 (nasal), and 1,805:1 (epiphyseal). At 5 d these ratios were 654:1 (growth plate), 1,021:1 (nasal), and 3,287:1 (epiphyseal). Thus we can only account for a very small proportion of the C-propeptide, presumably synthesized as part of the procollagen molecule. This is probably a result of its degradation to fragments which are not detected by our antibodies. The increased content of C-propeptide in growth plate cultures may therefore be due to increased synthesis and/or reduced degradation of the molecule, possibly as a result of its association with mineral (4,19). Whether there are any differences in the synthesis and posttranslational processing of procollagen and the C-propeptide in growth plate chondrocytes remains to be established. In spite of the small amount of C-propeptide present in these cultures, the results together demonstrate that growth plate chondrocytes contain and accumulate increased amounts of C-propeptide with respect to helical collagen. Moreover, the contents of C-propeptide closely correspond to 45Ca2+ incorporation, used here as a biochemical index of calcification.
Figure 10 caption: Total C-propeptide contents (cells plus medium) per microgram DNA of growth plate, nasal, and epiphyseal chondrocytes isolated from a 124-d-old fetus and cultured in the absence (c) and presence of 24,25-(OH)2D3 (24,25), 1,25-(OH)2D3 (1,25) and the two metabolites in combination at the same concentrations (DD). Standard deviations are not shown (<5% of mean) but significant differences (Student's t test analyses, P < 0.001) are indicated (*).
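The molar-ratio calculation described above can be written out explicitly. The function follows the stated assumptions (hydroxyproline is 11.2% of the weight of helical collagen of MW ~300,000, and the C-propeptide MW is 105,000), while the example masses are hypothetical and chosen only to land near the reported 883:1 growth plate ratio.

```python
def collagen_to_cpropeptide_molar_ratio(hydroxyproline_ug, c_propeptide_ug):
    """Molar ratio of helical collagen to C-propeptide.

    Helical collagen mass is inferred from hydroxyproline (11.2% of collagen by weight);
    collagen MW is taken as 300,000 and C-propeptide MW as 105,000, as in the text.
    """
    collagen_moles = (hydroxyproline_ug / 0.112) / 300_000.0
    c_propeptide_moles = c_propeptide_ug / 105_000.0
    return collagen_moles / c_propeptide_moles

# Hypothetical cell-layer contents: 30 ug hydroxyproline and 0.106 ug C-propeptide
print(round(collagen_to_cpropeptide_molar_ratio(30.0, 0.106)))  # ~884, close to the 883:1 reported
```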
The Effect of Vitamin D on the Net Synthesis of C-Propeptide and 45Ca2+ Incorporation in Cell Cultures
It is well established that vitamin D deficiency is associated with an arrest of cartilage calcification in the growth plate. Although fetuses were removed from vitamin D-sufficient cows, we decided to determine whether two metabolites of vitamin D, namely 1,25-(OH)2D3 and 24,25-(OH)2D3, had any influence on the metabolism of our cell cultures. Addition of 24,25-(OH)2D3 caused a small but significant (P < 0.001) increase in net C-propeptide synthesis in growth plate cultures but no significant changes were observed in nasal and epiphyseal cultures (Fig. 10). On its own, 1,25-(OH)2D3 had no effect on either cell population. Together these metabolites produced a significantly enhanced accumulation of C-propeptide in growth plate cultures compared to that observed with 24,25-(OH)2D3 alone.
24,25-(OH)2D3 produced a significant increase (P < 0.001) in 45Ca2+ incorporation in growth plate but not nasal cultures, but addition of 1,25-(OH)2D3 alone or in combination had no effect on 45Ca2+ incorporation in growth plate and nasal cultures except a less significant (P < 0.05) stimulatory effect on growth plate cells at day 3 of culture (Fig. 11). Although these stimulatory effects of 24,25-(OH)2D3 alone or in combination with 1,25-(OH)2D3 were small, they were reproducible in three experiments each with a different fetus. The lesser effect of 1,25-(OH)2D3 on 45Ca2+ incorporation shown in this experiment was not reproducible. To determine whether these metabolites influenced the proteoglycan contents of cultures, we examined uronic acid contents. In repeat experiments there was no reproducible significant influence by either metabolite singly or in combination on uronic acid contents of nasal and growth plate cultures. A typical experiment is shown in Fig. 12.
Figure 11 caption: Incorporation of 45Ca2+ per microgram DNA in cultures of growth plate and nasal chondrocytes isolated from the same 124-d-old fetus referred to in Fig. 10 and cultured with vitamin D metabolites as in Fig. 10. Standard deviations are not shown (<5% of mean) but significant differences (Student's t test analyses) are indicated: *P < 0.001; **P < 0.05.
Discussion
We have shown previously with immunohistochemistry at the light and ultrastructural level that the calcification of cartilage matrix is intimately associated in space and time with the focal concentrations of the C-propeptide of type II collagen, previously called chondrocalcin (19). The present biochemical and immunochemical studies confirm and extend these observations.
In this investigation we have successfully established an in vitro system for the study of matrix synthesis by isolated chondrocytes and, in particular, the calcification of cartilage matrix by chondrocytes isolated from growth plate cartilage. By comparison with cultures of chondrocytes isolated from noncalcifying cartilages, the characteristics of this calcifying system have been identified. The usefulness and sensitivity of 45Ca2+ incorporation as an index of natural calcification is clearly demonstrated with respect to morphologically and histochemically detectable calcification. The striking direct correlation between C-propeptide content and calcification is readily apparent, further implicating this molecular species in the process of calcification. | 2016-05-15T16:07:17.370Z | 1987-05-01T00:00:00.000 | {
"year": 1987,
"sha1": "10a3b3e89ffa1ec08e2952ce23004db00ec0d5cd",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/104/5/1435/1054548/1435.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "935d877a1089af73f445255656e5aa448d246e51",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
53052058 | pes2o/s2orc | v3-fos-license | Social Boundaries of Appropriate Speech in HCI: A Politeness Perspective
This position paper presents the view that there may be social boundaries to appropriate speech in HCI. While previous research has suggested that humanlike voices may not always be appropriate for computers to use, the same may hold true for linguistic concepts that have noticeable interpersonal and social functions. This paper examines the social functions of linguistic politeness to approach the issue of appropriate language use in spoken HCI, and the relationship between voices and language in this interaction space. Several studies exploring politeness in spoken HCI are discussed, with the view that it and other social talk may have different limitations in HCI, and that politeness itself may have to be subsequently reconsidered.
INTRODUCTION
Speech contains a wealth of interactional complexities that build and maintain the way people see each other in terms of power, identity, and personality (Cameron, 2001; Coulthard, 2013; Goffman, 2005). What we say and how we say it can impact on how we are perceived by others. This can also be seen in spoken human-computer interaction (HCI). However, despite the increasing prevalence of computer speech, there is still debate as to how we may want computers to speak to us. Moore (2017a), for example, argues that the frequency of humanlike versus robotic voices in computer speech is not necessarily appropriate. This is due to people expecting that systems have more advanced capabilities than they do in reality. This can create a mismatch between users' mental models of what a system is capable of and the reality of its limitations (Cowan et al., 2017).
As well as with voice, there remain questions as to what could be considered appropriate language in computer speech. There are voices that are considered humanlike and machinelike or robotic. While all language is arguably humanlike, some concepts may be more appropriate for computers to use than others or be considered "machinelike". This paper uses the concept of linguistic politeness as a lens to examine the possible social boundaries in computer language use during spoken interaction, and the relationship between language and voice in perceptions of computer speech. Subsequent challenges surrounding concepts of gender, vocal qualities, appropriateness, trust, and interaction contexts are discussed alongside reconsidering politeness in HCI contexts.
POLITENESS IN HCI
While there is wider debate around what constitutes politeness (Locher & Watts, 2005) and its limitations outside of Anglo-centric countries, this paper looks at examples of positive and particularly negative politeness strategies as described by Brown & Levinson (1987). These strategies are often linked to the concept of face, the public self-image we present to others during social interaction (e.g. Goffman, 2005). In face theory, it is often in the best interest of interlocutors to save face during interaction. This means avoiding damage to one's own or another's self-image. This is achieved through a dynamic understanding of what is acceptable and unacceptable communication at any given moment. This understanding can be impacted by various societal and cultural norms, which are themselves subject to change. Brown & Levinson (1987) described positive face (the desire to be liked and approved) and negative face (the desire not to be imposed upon by others). Accordingly, they describe positive and negative politeness strategies that speakers use to conduct face-saving in interaction. Positive politeness strategies include approval, attention to a listener's wants and desires, and indication of group membership between interlocutors. Negative politeness strategies often refer to minimising the imposition made on others, for example by using indirect instructions or requests.
In HCI research, the use of politeness strategies has had mixed responses from participants. Wang et al. (2008) demonstrated that polite pedagogical agents used in the classroom may lead to positive improvements in learning outcomes. More positive effects were observed in a study where participants observed interactions between people and robot instructors. Torrey, Fussell & Kiesler (2013) observed improvements in perceptions of likeability and considerateness, as well as reductions in how controlling a robot was perceived to be, when discourse markers and hedges were used. This experiment design was expanded upon, where the interaction distance between robot and participant was further explored to compare direct and indirect interactions, and observations of interactions with polite robots (Strait, Canning, & Scheutz, 2014). In this study, the authors found the same improvements in perceptions were only observed in third-person interactions, similar to the observations described by Torrey et al. (2013). Direct interactions with polite robots did not show the same effects.
Positive improvements have also been observed in the wild. While politeness as a concept may not have been the direct focus of their research, a large supermarket chain used politeness strategies when updating the language of a self-service checkout (Clark, 2016). For example, the phrase "Approval needed" was replaced with "We just need to approve this". The use of the hedge just can be seen to minimise the perception of what constitutes the approval, and the potential time it may take to accomplish. While perhaps not entirely a result of employing politeness strategies, improvements in people's user experience with the checkout systems were noted.
Although mixed responses were observed when using politeness in HCI, factors contributing to these responses were not always made clear. Two further studies provided more insight into perceptions of politeness in HCI, in which participants' interactions with computer instructors were examined during model building tasks. Negative politeness strategies were used in the form of vague language (Channell, 1994; Cutting, 2007). The first study saw polite instructions being perceived as inappropriate in synthesised speech instructions compared to non-polite instructions, which were perceived as relatively normal and expected (Clark, Bachour, Ofemile, Adolphs, & Rodden, 2014). The second study showed a marked improvement in how polite instructions were received when used in human-recorded instructions for a computer interface, compared with synthesised speech (Clark, Ofemile, Adolphs, & Rodden, 2016). The polite recorded voice was perceived as more appropriate in using polite instructions than two synthesised voices. However, there were still noticeable limitations in its appropriateness. Even with the human recordings, participants commented on how the interface was still "just a machine" and wasn't capable of using politeness in the same way as other people.
APPROPRIATENESS AND SOCIAL BOUNDARIES
These findings raised the question of possible boundaries with what is considered appropriate computer speech. While participants in the studies could not always explicitly identify the language that was causing the negative reactions, they could identify a disparity between the language used and the interface that was delivering it. The expectations of appropriate and acceptable language use for a more robotic-sounding voice appeared to be more limited than for a humanlike voice. This is similar to the gap between reality and expectations observed when comparing robotic-sounding and human voices (Moore, 2017a), and may represent somewhat of a verbal uncanny valley resulting in a sense of unease.
However, given the mixed reactions towards even the humanlike voice using politeness, there may be further limitations to appropriate spoken communication in HCI. As Moore (2017b) discusses, there may be limits as to what linguistic interactions can take place between humans and machines as unequal partners. While a human voice may help blur boundaries between expected human speech and expected computer speech, and act as a bridge between identities of sorts, there was still somewhat of a clash of users' mental models in the expectation of linguistic capabilities. What is expected and possible of a human may not automatically transfer to a computer.
Differences in social rules
Considering the link between the use of politeness strategies and face management, we may have to consider that the idea of face in HCI is not the same as in human interaction. This may also be true for social rules of communication. While people have been observed to attribute politeness to computers (Nass, Steuer, & Tauber, 1994) and formulate polite responses towards them (Large, Clark, Quandt, Burnett, & Skrypchuk, 2017), responses to politeness, particularly in instruction-based contexts, are varied. This is to be expected to an extent, as responses to language are also diverse in human communication. However, it may be the case that face threats do not carry the same weight in HCI as they do in human communication. Although a mutual acknowledgement of self-images and potential face threats is said to exist between people, the same might not be true between people and computers. Users still have an autonomy that can be imposed upon, and there are stakeholders involved (often commercial) for the computer, but likely not to the same extent as with other people.
RETHINKING POLITENESS IN HCI
Politeness in human communication is debated. Some see it as too focused on the polite end of a much larger polite-to-impolite spectrum and consider impolite language just as important in interaction. This is described, for example, in the concept of relational work: all aspects of interaction that contribute towards building and maintaining interpersonal and social relationships (Locher & Watts, 2005). Politeness research in HCI has often focused on the use of negative politeness strategies. However, while researchers may benefit from expanding the concept to include all elements of the (im)polite spectrum, there are still considerations for interpersonal and rapport management through language in spoken HCI that differ from human interaction.
Third party involvement
Firstly, there are third party stakeholders at play, such as designers, that are involved in interactions to varying degrees. This can include choosing all system output explicitly, for example with the limited phrases used by self-checkouts, or allowing output to be determined from a bank of possible utterances. In human-human interactions, the explicit creation of linguistic output for others does exist (e.g. speech writers for politicians or business figureheads); however, this is less common than in HCI. The involvement of a third party during speech-based HCI may impact upon the evaluation of relational work and social talk like politeness strategies. The use of humour, for example, may be attributed to a system developer or company rather than the system itself. Devices may also be associated with companies, and their respective reputations as perceived by individuals may influence users' perceptions of computer language use.
Ownership and personalisation
Secondly, there is the aspect of ownership to consider. People can interact with systems like automated checkouts that are based within a private business. They can also interact with IPAs that are part of their personal property, be it on a smartphone or a home-based intelligent assistant. There may be a personal sense of attachment to a smartphone, for example, or a distrust of devices placed within corporate spaces. This also ties in with the concept of personalisation (being able to customise a device to one's own preferences), the availability of which may depend on whether or not the user owns the device in question. Being able to alter characteristics such as the gender of a computer's voice may alter the way in which language is perceived by users.
FUTURE WORK AND CHALLENGES
In reconsidering politeness and other social talk in HCI, and the factors that affect its use and perception during interaction, a number of challenges arise for future research. These are briefly outlined below.
Gender and voice quality
One feature often absent from the research discussed in this paper is a comparison between politeness in male and female synthesised speech. Some research suggests politeness is more common amongst female speakers than male speakers (Lakoff, 2004), though this stereotype has been challenged over the years (Mills, 2005). Nevertheless, the ability to personalise the gender of voices in HCI with some devices presents an interesting avenue of exploration: expectations and perceptions of politeness may be affected by both a system's perceived gender and that of the user. This may be confounded by the availability of choice. The relationship between language and a wealth of other vocal qualities also presents itself as a further area of research, both within and outside of device personalisation. Theories including the similarity-attraction effect (Nass & Lee, 2000) could be tested alongside language use, focusing on qualities such as accents and dialects. Similarities between users and systems may affect the perception of concepts like politeness in speech-based HCI.
Measuring appropriateness and trust
The measurement of the appropriateness of politeness and social talk in HCI remains another challenge. Moore (2017a) provides some examples of how appropriateness may be measured with computer voices. However, language use is arguably a more complex affair. Focusing on specific linguistic concepts (e.g. politeness, face, vague language) would likely prove an easier way of analysing appropriateness, as opposed to the essentially infinite utterances available. Exploring the link between the appropriateness of language and trust would also prove useful, particularly in contexts that require systems to interrupt their users and direct the flow of conversation. In such instances, trust would be of great importance. Understanding which linguistic concepts would influence this would be highly valuable. Further experiments can test the effect of politeness and other social talk during system-led interruptions, and how this may impact user perceptions of appropriateness and trust of information and systems in these contexts.
Moving beyond instructions and 'one-shot' interactions
The appropriateness of politeness may lie in the context of use. Indeed, exploring a wider array of contexts is essential given the unprecedented level of available spoken interactions with computers. A number of the studies discussed in this paper investigate instruction or advice-giving contexts, usually with static interfaces or robots. The effects of social talk in other contexts, including mobile contexts and more collaborative environments where users and computers operate more at a peer level, would provide a richer understanding of how people like to be talked to by computers.
Furthermore, how we like to talk to computers ourselves should also be investigated with regards to social talk. Prior research has observed people using politeness strategies and vague language towards computers (Large et al., 2017), though it is less clear why this occurs. For instance, it may be a case of lexically aligning (i.e. matching the same language) with computers (Branigan, Pickering, Pearson, & McLean, 2010), but there may be further questions as to how beliefs about a system affect whether it occurs (Branigan, Pickering, Pearson, McLean, & Brown, 2011).
The research discussed in this paper is also relatively short-lived. While speech interfaces like IPAs are still relatively unused (Cowan et al., 2017), we nevertheless have greater potential to use them over longer periods of time than in previous years. Longitudinal studies are few in this area of research, and examining their interaction with other concepts, including appropriateness and trust, would be helpful for considering linguistic aspects of system design.
SUMMARY
This paper presents the view that there may be social boundaries to the appropriate use of linguistic concepts such as politeness in spoken HCI. While negative politeness strategies, for example, have been perceived more positively when used by human versus synthesised voices, this picture is incomplete. The very nature of being a computer may limit its ability to appropriately and capably employ certain linguistic concepts that are inherently social. Using politeness as a lens, this paper reflects on the possible social boundaries surrounding appropriate computer speech. Considerations for rethinking politeness and social talk are presented, alongside challenges surrounding future work around gender, voice quality, appropriateness, trust, and interaction context. Future experiments are planned to assess these concepts. These will include developing a bottom-up understanding of what users expect from speech interfaces in terms of language and voice, and how this is affected by different interaction contexts. Furthermore, users' interactions with a speech interface that employs politeness strategies will be assessed to investigate how synthetic and human speech affects people's alignment to politeness strategies. These experiments will further the body of knowledge of politeness and social communication with speech interfaces, and further explore where boundaries of humanlike and machinelike communication may lie. | 2018-10-03T17:41:51.091Z | 2018-07-01T00:00:00.000 | {
"year": 2018,
"sha1": "d465f3e1f5fc72a64094a5875c9e91f2c7fa26f4",
"oa_license": "CCBY",
"oa_url": "https://www.scienceopen.com/document_file/cddbab21-408b-4138-a303-713aacdce865/ScienceOpen/BHCI-2018_Clark.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7f068705177e4aab9ebdff77736862b667d2a447",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Sociology"
]
} |
214372885 | pes2o/s2orc | v3-fos-license | Further development of Asian regionalism: institutional hedging in an uncertain era
ABSTRACT Currently, the confrontation between two global giants, the United States and China, in trade and technology advancement and hegemony in international politics is escalating. The possibility of a Sino-U.S. economic “war,” or the so-called “new Cold War,” not only indicates the escalation of this confrontation but also symptomizes the international order’s transformation as a result of the change in power balance and rise of a challenger against the existing United States–led international liberal order. Most IR specialists focus on the prospects of this confrontation and its uncertain worldwide circumstances and are concerned about its impact on East Asian/Asia Pacific regional circumstances. Among them, prospects regarding regionalism and regional institutions in Asia seem pessimistic. However, Asian regionalism was activated following the decline in United States’ power and rise of China as a global power, and the international liberal order’s retreat became visible toward the end of the 2000s. Furthermore, even under the uncertain situations created by the Sino-U.S. confrontation, regional powers, including China, Japan, and the Association of South East Asian Nations (ASEAN), are promoting their multilateral approach by proposing and advancing various regional frameworks. This indicates that each regional power is adopting the “institutional hedging” strategy to ensure that their individual interests are satisfied and the regional order is comfortable for themselves. This paper verifies that regionalism and regional institutions have become important as measures of regional power for countries’ institutional hedging strategies to overcome the challenges posed by the beginning of regional uncertainties and that Asian regionalism is more active today than ever before.
Introduction
Today, in the international scenario, the confrontation between two global giants, the United States and China, is escalating in terms of the advancement of trade and technology and hegemony in international politics. The possibility of occurrence of a Sino-U.S. economic "war," or the "new Cold War", 1 indicates not only the escalation of this confrontation but also a symptom of transformation of the international order as a result of the change in power balance and rise of a challenger to the existing United States-led international liberal order. These confrontational and uncertain circumstances in the world and in the East Asian/Asia Pacific region are attracting the attention of IR specialists focusing on this region. 1 For example, Kaplan, "A New Cold War has begun."
Diverse evaluations have been performed on the significant impact of Asian regionalism on international politics. Realists tend to provide a limited evaluation of the role of international institutions as autonomous actors and believe them to simply act out the underlying power relationship. 2 Further, the predictions of some scholars on Asian regionalism in the current unclear situation, which exhibits escalating power rivalry, are pessimistic. 3 However, regionalism, or regional multilateralism, in Asia is becoming a significant aspect of the international relations maintained by this region. Regionalism in Asia developed toward the end of the Cold War, with the creation of various regional institutions for nearly two decades after the 1980s. Against the backdrop of the changing regional circumstances caused by the escalation of the Sino-U.S. rivalry, the promotion of regionalism in Asia has become more active since the end of the 2000s. Currently, several world powers, including the United States and China; middle powers, like Japan; and some small powers, such as the Association of South East Asian Nations (ASEAN) member countries, are promoting their regional multilateral approaches by proposing and advancing various regional frameworks. The development of regional frameworks, for example, the TPP/CPTPP, Regional Comprehensive Economic Partnership (RCEP), AIIB, Belt and Road Initiative (BRI), and the existing ASEAN-led regional architecture, indicates that each regional power is adopting the "institutional hedging" strategy to ensure that its interests are met and the regional order is sufficiently conducive to realizing its political and economic aims. This paper examines how and why regionalism and regional institutions became important aspects of international relations in Asia for nearly two decades following the end of the Cold War and how and why they have recently become significantly active in response to the uncertain international circumstances caused by changes in or erosion of the existing international and regional orders. Regarding the former, this paper points out that the United States-led international liberal order fostered the development of regionalism in Asia. Regarding the erosion of the existing international and regional orders, this paper argues that uncertain circumstances caused by the decline of the international liberal order motivated regional countries to promote institutional hedging, which resulted in the further activation of regionalism.
Accordingly, first, this paper traces the development of regionalism and regional institutions for two decades since the end of the Cold War. Further, it reveals that the international liberal order, which had previously been led by U.S. hegemony, created the circumstances in which regionalism and regional institutions developed in Asia for nearly two decades and led to the establishment of a multilayered "ASEAN-led" regional institutional structure and other regional institutions. 4 Second, this paper discusses the recent trends in Asian regionalism and regional institutions against the background of the changing power balance between the United States and China, escalation of the Sino-U.S. rivalry, and retreat of the United States-led international liberal order. Third, the paper highlights the necessity of providing new concepts and perspectives to understand the complex aspects of the recent development of regionalism and regional institutions in Asia. In addition to examining the "contested multilateralism" perspective, the paper discusses the institutional hedging concept. Furthermore, it points out that, along with the escalating power rivalry, the behaviors and reactions of middle and small powers will be strong determinants of the trajectory of development of regionalism and regional order. Finally, the paper discusses the prospects for Asian regionalism and the regional and global orders against today's uncertain global circumstances.
Era of new regionalism in Asia
During the Cold War era, a few regional institutions at the governmental level were active in Asia. The Economic Commission for Asia & the Far East, which was the first regional institution established in Asia (in 1947), reflected the development of Pan-Asian regionalism; however, it lost its initial influence following the mid-1960s, which marked the establishment of the Asian Development Bank (ADB). Although the ADB has been functioning as a regional international bank and promoting economic and social development in Asia, its area of responsibility has been limited to the provision of financial support for the economic and social development of Asian countries. Later, the ASEAN was established in 1967. Initially, it contributed to the development of the institutional core of Asian regionalism; however, later, the expectations among political and economic regional power elites regarding this regional institution declined, and the ASEAN's activities were limited to maintaining its image as the "political symbol" of unity among "like-minded" Southeast Asian countries.
After the 1950s, regionalism flourished in Europe and, in the 1960s, it spread to other developing regions, including Asia. Subsequently, regionalism seemed to become inactive worldwide; instead, economic interdependence deepened on a worldwide rather than a region-wide basis. However, after the mid-1980s, regionalism was revitalized in Europe, and it spread to North America. These events reactivated the orientation toward regionalism in Asia and strengthened the argument for the necessity and possibility of establishing government-level regional institutions for economic cooperation and coordination in Asia, under the banner of "Pacific" or "Asia-Pacific" cooperation. Consequently, the Asia-Pacific Economic Cooperation (APEC) was established in November 1989, immediately after the collapse of the Berlin Wall. Subsequently, the activation of regionalism in many regions began to be called "new regionalism." 5 The institutionalization of Asia was extensively promoted in the 1990s and 2000s following the end of the Cold War. In addition to the ADB, ASEAN, and APEC, many regional institutions were established during this period. The institutions established in this era can be categorized into two types: First, the ASEAN's position was upgraded in Asian regionalism and ASEAN-led regional institutions were developed. The ASEAN enlarged its membership by affiliating with Indo-China countries, such as Vietnam, Laos, Cambodia, and Myanmar, which used to be considered origins of the threat to peace in Southeast Asia. By helping nations to overcome the serious damage caused by the Asian financial crisis in 1997-1998, the ASEAN advanced its efforts to promote regional integration and cooperation and, finally, signed the ASEAN Charter in 2007. Furthermore, the number of ASEAN-led regional institutions increased, and these institutions helped shape the ASEAN-centered regional architecture, which comprises the ASEAN Regional Forum (ARF), ASEAN+3 (APT), East Asian Summit (EAS), and ASEAN Defense Ministers Meeting plus (ADMM+). The ASEAN member countries have attempted to politically influence the East Asian/Asia-Pacific regional order by making and using such ASEAN-led regional institutions. Hence, they emphasize the importance of maintaining "ASEAN centrality" in their discourses.
The second type of institutions included some non-ASEAN-led regional institutions that were developed after the end of the Cold War. The institutions belonging to this category are free from ASEAN centrality, although some of them include some ASEAN member countries. The most remarkable institution in this category is the Shanghai Cooperation Organization (SCO), which was established based on a joint initiative by China and Russia. Some other regional institutions encompassing South Asia or parts of it were launched as well. In addition to the South Asian Association for Regional Cooperation (SAARC), which was established in 1985, the Indian Ocean Rim Association for Regional Cooperation (IOR-ARC) and the Bay of Bengal Initiative for Multi-Sectoral Economic Cooperation (BIMSTEC) were established during the latter half of the 1990s. Today, the IOR-ARC has been transformed into the Indian Ocean Rim Association (IORA). Further, during the early 2000s, under the initiative of then Thai Prime Minister Thaksin, the Asian Cooperation Dialogue (ACD) was established. 6 As discussed by many scholars, regional institutionalization in Asia was not at a high level. Katzenstein argued that "Asian regionalism was characterized by dynamic developments in markets rather than by formal political institutions." 7 He also argued that Asian regionalism eschewed "formal institutions" and that the characteristic of Asian regionalism was "soft regionalism" compared to the European "hard regionalism," which was "based on politically established discriminatory arrangement." 8 Although Katzenstein made his argument approximately 20 years ago, a lack of strict and high-level institutionalization remains one of the main characteristics of Asian regionalism and regional institutions to this day. Although informalism had many advocates, the post-Cold War period nevertheless witnessed significant development of regionalism with institutional frameworks.
The fundamental question is why regionalism became active specifically two decades after the end of the Cold War. The activation of Asian regionalism was a part of the new regionalism wave. This upsurge of regionalism resulted from the predominance of the liberal international order of the post-Cold War era. From the perspective of advocates like John Ikenberry, the liberal international order is "the order that is relatively open, rule-based, and progressive." 9 It refers to the United States-led hegemonic order. 10 The origin of this order can be traced back to the international order that was prevalent in the Western camp at the time of the Cold War confrontation, and this order encompassed the entire world once the Cold War came to an end. 11 The international liberal order is founded on three pillars: liberal market-led capitalism, liberal internationalism, and liberal democracy. The predominance of the liberal international order promoted the convergence of norms and values in terms of political regimes, economic structures, ideal domestic societies, and the management of international affairs. Furthermore, the aforementioned three pillars were factors that caused the development of Asian regionalism for approximately two decades after the end of the 1980s. 6 The Thaksin administration's policies reflecting the regionalism efforts led by Thailand are very interesting; their results should be examined in greater detail to clarify the development of regionalism in Asia. 7 Katzenstein and Shiraishi, eds., Network Power, 7. 8 Ibid., 22, 40. 9 Ikenberry, Liberal Leviathan, 2. 10 Ibid.
First, the penetration of economic liberalism bolstered various types of regional economic cooperation, because Asian countries started sharing the same model for their own economic development by introducing the market economy. Even before 1989, some communist countries, such as the People's Republic of China, the Soviet Union, and Vietnam, started reforming their economic systems to partially introduce the market economy. The official cessation of political confrontation between the U.S. and communist camps further promoted this trend of market economy expansion and, subsequently, many countries started cooperating in the economic field to realize their own development. 12 Second, the influence of liberal internationalism on international and regional circumstances started increasing in the post-Cold War era and, subsequently, international cooperation and institutionalization started being perceived as acceptable and relevant measures to solve different issues, rather than as measures signifying the pursuit of national interests by each individual stakeholder. Liberal internationalism emphasizes the non-zero-sum aspect of international affairs and strongly legitimizes the peaceful settlement of disputes among countries, rather than the crude logic of power politics. A multilateral approach with international and regional cooperation had more legitimacy than ever before, and many international and regional institutions were developed to address various global and regional issues. In Asia, peace and stability were sustained under the international order based on the credibility of the United States' power and the widespread belief in the country's commitment to using that power to sustain the order. 13 The feasibility of liberal internationalism in this region was founded on the credibility of the United States' power and the country's commitment.
Even though the pace of institutional development was slow and the principle of "respect for sovereignty" was too strong to allow Asian countries to drastically and effectively promote regional cooperation, the advancement of regionalism and multilateral institutions in Asia was encouraged and further development was expected. Finally, the norms and values of liberal democracy became prominent after the collapse of the communist camp. Although the People's Republic of China, Vietnam, and Laos have maintained their communist-party regimes, democratization aiming at the establishment of liberal democracy became a critical issue in many Asian countries. Further, the norms and values of liberal democracy strongly influenced countries to undertake the reformation, or limited reformation, of their political systems. Such a trend significantly affected the agendas of Asian institutions, such as the ASEAN. ASEAN member countries actually started discussing human rights and democratization issues in the early 2000s, though they hardly did or could do so in the 1990s. The collapse and weakening of authoritarian regimes in some Southeast Asian countries, including Indonesia, triggered an attitudinal change. In addition to such indigenous regime changes, the increasing legitimacy of democratization worldwide encouraged ASEAN member countries to formally address this issue. The ASEAN Charter, which was signed in 2007 and effectuated in 2008, set the promotion of democracy and protection of human rights as a part of the Association's objectives. 14 Furthermore, the ASEAN established the ASEAN Intergovernmental Commission on Human Rights in 2009 and adopted the ASEAN Human Rights Declaration in November 2012. 15 11 Ikenberry, After Victory. 12 Fausett and Hurrell, Regionalism in World Politics. 13 Bisley, "Contested Asia's 'New' Multilateralism," 227-228.

New waves of regionalism under a changing regional order

As discussed in the previous section, the development of regionalism in Asia for nearly two decades after the end of the Cold War was a part of the new regionalism wave and was simultaneously encouraged by the global predominance of the liberal international order. Hence, the retreat of the liberal international order, which was based on U.S. hegemony according to many scholars, strongly affected the political and economic situations and the trajectory of regionalism in this region. 16 Scholars point out various pieces of evidence of, or concrete phenomena demonstrating, the retreat of the liberal international order. The first is the relative decline of the hard and soft power of the United States. The triggers for the decline of the United States' power were as follows: First, the then Bush administration's unilateralism created doubts among countries regarding the reality of liberal internationalism. In particular, the United States' intervention in the Iraq War despite strong global criticism disappointed many European countries, except the United Kingdom. The second trigger was the World Economic Crisis, which began in 2008. The crisis, which caused severe economic damage to the United States and the rest of the world, created doubts about the durability of the liberal economy and resulted in the rise of emerging economies, such as China and India, whose damage was relatively lighter and whose economies could be revived sooner than those of advanced countries.
The second phenomenon proving the retreat of the liberal international order is the rapid expansion of China's economy, which caused an increase in the country's political leverage in both the global and regional spheres; subsequently, the hegemonic power of the United States has been declining, although China cannot immediately replace the United States' hegemonic position. This change in the power balance between the two countries undermines the foundation of the liberal international order. Furthermore, China's assertive attitude on sovereignty-related issues in the East China Sea and South China Sea damages the reliability of the rules and norms of liberal internationalism. For example, China's rapid reclamation of some islands and rocks in the South China Sea violated the rule against changing the status quo by the use of force and eroded the mutual trust and stability that are indispensable to the management of international relations under internationalism. China is often depicted as the "challenger" to or "revisionist power" against the existing United States-led international liberal order. The escalation of the Sino-U.S. confrontation reflects changes in the aforementioned power balance between the two countries, rather than the "uniqueness" of the current Trump administration's foreign policy.
However, interestingly, efforts to promote regionalism became more active in the 2010s as a result of the period's turbulent global and regional circumstances. First, the Obama administration actively attempted to include the United States as an actor in several regional groupings, such as the EAS and the ADMM+, and this United States-centered regional multilateralism approach in the context of its "rebalance policy" helped advance Asian regionalism. Furthermore, while maintaining the former Bush administration's interest in the Trans-Pacific Partnership (TPP), Obama formally started expanded TPP negotiations in 2010. Both administrations expected the TPP to be instrumental in revitalizing the U.S. economy, which was damaged by the preceding World Economic Crisis.
The decline of U.S. hegemonic power created an eagerness among nations to adopt a regional multilateral approach. To clarify this point, in the early 1990s, Donald Crone examined the reason why the APEC was established in 1989. 17 He argued that it was difficult to establish a government-level regional institution in the Asia-Pacific while U.S. hegemony was so dominant that the regional power structure was vertical, because the United States did not have any interest in building any regional framework that might bind its behavior in the region. The relative decline of U.S. hegemonic power caused the transformation of the regional power structure from a vertical to a horizontal structure; subsequently, the United States became amenable to establishing a regional grouping to supplement its insufficient leverage. 18 This argument can explain the regional multilateral approach adopted by the Obama administration.
In general, the Trump administration's foreign policy is strongly driven by a bilateral approach. Further, President Trump strongly criticizes multilateralism and denies the importance of regional trade agreements (RTAs), such as the North America Free Trade Agreement (NAFTA) and the TPP. The Trump administration renegotiated the contents of the NAFTA and transformed it into a new agreement following its "America First" principle and withdrew from the TPP, as mentioned earlier. However, the Free and Open Indo-Pacific (FOIP) proposed by the Trump administration (which is discussed in detail later in this paper) seems to promote "decoupling" from China by calling on partners in the Indo-Pacific region and collectively putting the proposal forward with them.
Further, China started taking a strong initiative to promote its own regional approach. In the 2000s, China was already an active player in Asian regionalism, as evidenced by the upgrading of the Shanghai Five to the SCO, and started its involvement in the ASEAN-led regional architecture by actively promoting the China-ASEAN free trade agreement (FTA), its efforts to increase its role in the CMI and other regional cooperation schemes, and its attempts to establish the EAS as a tool to create an East Asian community. After Xi Jinping became the "paramount leader" in 2012, China's regionalism initiative underwent further enhancement. In addition to expanding its role in the existing regional architecture, including the ASEAN-led institutional architecture, China started demonstrating its unified regional and global vision by proposing the One Belt One Road/Belt and Road Initiative (BRI), Asian Infrastructure Investment Bank (AIIB), and New Asian Security concept. In the 2010s, China's initiative was supported by its strong economy, since the country was the second largest economic power in the world, and the increase in its political leverage. Hence, China's initiatives, such as the BRI and AIIB, have a significant impact on and strongly influence the perceptions of political and business elites in Asian countries. Furthermore, China's active initiatives have affected regional circumstances, including the structure of the regional architecture in Asia.
Further, the BRI does not aim to build any formal institution. However, the Xi administration proposes a vision for a desirable international and regional order in the BRI and encourages its followers to extend their cooperation to realize this vision. Today, Japan is adopting a multilayered regional multilateral approach by engaging both the TPP/CPTPP and RCEP and by promoting the FOIP. Further, ASEAN member countries are attempting to advance Southeast Asian regional integration by establishing the ASEAN Community (AC), and they reiterate the importance of "ASEAN centrality" and ASEAN-led initiatives on various occasions.
Moreover, Japan has become a more active player in the advancement of regionalism. During the 2000s, China and Japan started competing over the leadership position, which acted as the impetus to accelerate efforts to promote regionalism and regional cooperation during this era. 19 In the 2010s, Japan's multilateral regional approach became a critical aspect of its foreign policy. After Japan proposed the idea of the Comprehensive Economic Partnership in East Asia (CEPEA), or ASEAN+6, in 2005, Japan and China, which supported the idea of the East Asia Free Trade Agreement (EAFTA), or ASEAN+3, competed over the membership of the regional economic integration framework in East Asia. 20 By compromising with China on the membership issue, Japan supported the initiation of new RTAs and the RCEP, for which negotiations actually started in early 2013. In addition, Japan decided to join the United States-led TPP negotiation and attempted to direct it toward the realization of high-standard and high-quality RTAs.
ASEAN member countries proposed the Association's commitment to regional integration and cooperation as well. In other words, the RCEP negotiation is a venture to integrate each ASEAN+1 FTA signed during the 2000s with mainly six powers outside Southeast Asia into a mega-RTA in East Asia as a part of the ASEAN-centered regional architecture. Hence, despite having economic sizes lower than those of China and Japan, ASEAN member countries attempted to maintain their leverage to determine the trajectory of the RCEP negotiation. Moreover, the ASEAN leaders declared the establishment of the AC, comprising the following three pillars: the ASEAN Political-Security Community, ASEAN Economic Community (AEC), and ASEAN Socio-Cultural Community. At the same time as declaring the establishment of the AC, they revealed a new blueprint, the ASEAN Blueprint 2025, listing the objectives to be realized over the next decade and the concrete actions to be taken. 21

The regional concept of the Indo-Pacific emerged during the early 2010s, following which its importance as a term depicting a strategic arena in the area extending from the Indian Ocean to the Pacific Ocean has been widely recognized. The advocates of this concept in the United States, Australia, Japan, and India considered it a viable measure to balance the rise of China as a world power. Since the mid-2010s, Japan's political leaders have been promoting the "Free and Open Indo-Pacific" in their speeches, for example, Foreign Minister Kishida's statement in 2016 and Prime Minister Shinzo Abe's statement at TICAD VI in Nairobi in August 2016. 22 Japan's FOIP aims to "improve 'connectivity' between Asia and Africa, and Indian and Pacific Ocean, and then promote stability and prosperity of the region as a whole" by developing "free and open maritime order in the Indo-Pacific region as 'international public good'." 23 Besides Japan, some other actors, such as the United States, Australia, India, and recently the ASEAN, have mentioned their own visions under the regional concept of the Indo-Pacific. None of the visions on the FOIP or Indo-Pacific aim to establish formal regional institutions. However, the actors provide a regional vision, declare their will to provide necessary assistance to supporting countries in realizing the common vision, and encourage these countries to cooperate in relevant efforts. The Abe administration clarifies three pillars of cooperation under the FOIP: the promotion and establishment of the rule of law, freedom of navigation, free trade, and so on; the pursuit of economic prosperity by means of quality infrastructure development and cooperation in education and training; and commitment to peace and stability by promoting capacity-building efforts and humanitarian assistance and disaster relief (HA/DR). Further, the Trump administration intends to support Indo-Pacific nations by implementing infrastructure projects "by reforming our development finance institution" and to build stronger partnerships to ensure common security goals among Indo-Pacific nations. 24 However, the countries that propose the Indo-Pacific vision have diverse interests. In particular, they differ in their perspectives regarding their interactions with China. Although the Abe administration struck a tone of balancing against China when it first proposed the FOIP, Japan's FOIP currently takes a more inclusive approach toward China and implies the possibility of collaborating with China's BRI.
This is because Japan is improving its relationship with China and therefore cannot maintain an exclusionary attitude toward it. In addition, the ASEAN's vision on the Indo-Pacific explicitly includes China as a part of the Indo-Pacific partnership that it aims to establish. 25 On the other hand, the United States' FOIP adopts a tougher approach toward China, whereby China is considered "a revisionist power" against the existing regional and liberal international order. 26 Although the specific contents of the visions are diverse, many countries are currently articulating an Indo-Pacific vision, which can be considered another aspect of the activation of regional multilateral approaches in Asia.
Activation of regionalism and the institutional hedging strategy
As mentioned in the previous section, regionalism has been becoming increasingly active since the end of the 2000s, which marked the beginning of the decline of the liberal international order. The main reason for the activation of regionalism is that Asian countries, including China, Japan, and the ASEAN member countries, as well as the United States, are now actively using regionalism as a political measure to ensure the realization of their own interests amid the uncertain circumstances caused by the decline of the existing international order.
Such a political orientation toward regionalism among the main players in Asia has caused regional initiatives to overlap. Many scholars have already analyzed these complex situations. For example, Kai He calls them "contested multilateralism 2.0" 27 in relation to the "institutional balancing" concept, which he also proposed. 28 According to his definition, institutional balancing is a state's balancing behavior manifested through multilateral institutions, as opposed to "soft balancing." 29 In other words, it is "a new type of balance of power strategy through which states can use multilateral institutions instead of traditional military means to compete for power and influence in world politics." 30 He argues that the development of ASEAN-centered regional institutions in the 1990s and 2000s is "multilateralism 1.0," and that the World Economic Crisis triggered the beginning of a new phase of Asian regionalism, contested multilateralism 2.0, in which major world powers, such as China, Japan, the United States, and Australia, rather than the ASEAN member countries and other nations in Southeast Asia, drive the development of Asian regionalism by using regional multilateralism as a tool to pursue their interests.
While criticizing some points of Kai He's argument on contested multilateralism 2.0, Nick Bisley points out the increasingly competitive characteristics of Asian regionalism and uses the term contested multilateralism to describe the recent status of Asian regionalism. 31 He observed that Asia's multilateralism has entailed only a single expansion since the 1990s and categorized the various regional institutions that have been established in Asia into three groups: (1) ASEAN-led regional institutions, such as the ARF, APT, the set of ASEAN+1 groups, ADMM, and ADMM+; (2) regional institutions led by China's desire to reshape the regional order, such as the AIIB, SCO, CICA, and BRI; and (3) regional frameworks organized around American regional primacy and its economic and strategic interests, including the APEC, Trilateral Security Dialogue, Proliferation Security Initiative, Shangri-La Dialogue, and ADB. 32 Further, Ellen Frost advanced the concept of "rival regionalism." 33 "Rival regionalism" refers to regionalism led by countries that are indifferent or even hostile toward the United States, and it provides an alternative to U.S. and Western leadership by creating or revitalizing non-Western organizations. The leading advocates of rival regionalism are China and Russia, and it is embodied in organizations and schemes such as the SCO, New Development Bank, China-ASEAN free trade agreement, BRI, CICA, and the Network of East Asian Think-Tanks (NEAT). According to this argument, the recent activation of regionalism in Asia has strong confrontational characteristics. 26 Department of Defense, "Indo-Pacific Strategy Report," 7. 27 He, "Contested Multilateralism 2.0." 28 The term contested multilateralism was originally proposed by Morse and Keohane. According to the authors, it refers to "the situation that results from pursuit of strategies by states, multilateral organizations, and non-state actors to use multilateral institutions" to describe competitive regime creation. For more details, see Morse and Keohane, "Contested Multilateralism," 385-412. By his own admission, He's "Contested Multilateralism 2.0" was inspired by their argument, although there are critical differences between the concepts proposed by the authors. First, He considers nation-states to be dominant actors superseding other actors, such as sub-state and non-state actors, in contested multilateralism 2.0. Second, He's argument focuses on the competition between nation-states at intra- and inter-institutional levels. For details, see He, "Contested Multilateralism 2.0," 211. 29 He, Institutional Balancing, 10. 30 He, "Contested Multilateralism 2.0," 211. 31 Bisley, "Contested Asia's 'New' Multilateralism."
Moreover, it cannot be denied that the recent strengthening of regionalism in Asia has been contested. However, the development of various regional institutions and proposals on regional visions to encourage cooperation and collaboration expresses more complex realities than a simple explanation of only the contested orientations of Asian players. All the arguments based on contested regionalism, contested multilateralism 2.0, and rival regionalism both implicitly and explicitly suppose that the world or regional players can be divided into the following two groups: one group that is attempting to preserve the existing international and regional order, and the second group that is attempting to revise the existing order and replace it with a new one. Further, the arguments assume that the former group is led by the United States and the latter group by China and Russia. Although such a dichotomy reflects a part of the reality, it oversimplifies the complex situations, as described here.
First, there are many countries that have joined several regional institutions that can be considered "contesting" institutions. A typical case is that Japan, Australia, New Zealand, Singapore, Brunei, Malaysia, and Vietnam have joined both the RCEP and the TPP/CPTPP. Further, the four Southeast Asian countries among them have joined the AEC. In other words, small countries in Asia seem to hedge risk by undertaking a "diversification" approach toward efforts to promote regional integration.
In addition, China and Japan have begun to demonstrate the improvement of the Sino-Japanese relationship and are actually trying to promote collaboration between them on assistance for infrastructure development in Asia. Prime Minister Abe stated in his policy speech to the 196th Session of the Diet in January 2018 that "we will also work with China to meet the growing infrastructure development in Asia." 34 Zhong Shan and Japan's Economic Minister Hiroshige Seko joined it, and 52 memorandums on joint cooperation in third countries were actually signed. 36 Second, whether China is attempting to revise or recreate the international or regional order remains uncertain. However, it cannot be denied that the rise of China as a global power is both politically and economically remarkable, and the country's efforts to modernize its military and build naval power are affecting the strategic balance in Asia. Nevertheless, the norms and values that China is attempting to realize remain unclear, and there is no information on whether China has a will to build or replace the existing world and regional order. The BRI seems to be part of China's grandiose project to realize this aim. However, various stakeholders including both state and non-state actors are engaged in the BRI; they include the central government and state-owned enterprises, local governments and local enterprises, and private enterprises. Hence, the BRI is driven by the different interests of various stakeholders, rather than by a single and consistent ambition of Beijing to expand China's sphere of influence.
In addition, China is emphasizing only a win-win situation and is not proposing any new norms and values that obviously differ from those of Western countries. At the discourse level, Xi has advocated an open economy. 37 This is partly an attempt to counter the "America First" policy and protectionism of the Trump administration. Simultaneously, the Chinese economy was, and still is, able to develop in the context of the deepening economic globalization led by the liberal economy. The collapse of the free and open market economy is not conducive to China's long-term interests; hence, China does not consider the choice of being a revisionist power, at least economically, relevant.
China can be considered a challenger to the liberal international order, since it not only catches up with the economic predominance of the United States but also provides a "China model" for governance and economic development, which refers to economic development accomplished by "state capitalism" or the "Beijing consensus." 38 The China model reveals that economic development does not automatically result in democratization; rather, it can sustain a state-led authoritarian regime. This model seems to have an influence on several countries, including some Asian countries like Cambodia, and the provision of such an alternative to the liberal democracy model is a challenge to the liberal international order. However, it is not clear whether China or the Chinese government will intentionally use this model as a political tool to expand its leverage. In October 2015, Obama stated in Congress that "we can't let countries like China write the rules of the global economy." 39 However, it is noted that Obama gave this speech to a domestic audience, particularly the members of Congress, to obtain their support for the TPP.
Third, arguments pertaining to contested multilateralism focus on the wills and behaviors of great powers, such as China and the United States, alone and consider the creation and fostering of regional institutions a political tool used by these countries to maintain or expand their leverage. However, today, international and regional circumstances are shifting from a unipolar to a multipolar structure; in such a multipolar world, the behaviors of middle and small powers are critical in determining the direction of the international and regional order, since any great power requires a large number of followers and their strong support. Accordingly, they cannot ignore the demands, requests, and interests of middle and small powers. From this perspective, the creation and revitalization of a regional institution cannot be considered the result of a single country's strategy. Rather, both great powers and middle and small powers are involved in the negotiation process of creating or revitalizing regional institutions, and their diverse interests, which are sometimes strongly confronted and contested by others, finally lead to either the creation or revitalization of a regional institution or the collapse of the negotiation. 36 JETRO, "1st Japan-China Third Country Market Cooperation Forum". 37 Xi, "Seizing the Opportunity." 38 Halper, Beijing Consensus. 39 Obama, "Statement by the President."
To understand the complex situation involving the enhancement of Asian regionalism and to avoid the oversimplification caused by adopting only the contested regionalism model, we consider the following points: First, we clarify that both the balancing of power and the institutional hedging of state actors, as responses to increasing strategic and political uncertainty as well as to their strengthening economic interdependence, caused the complex situations pertaining to regionalism in Asia. Partly applying He's definition of institutional balancing, institutional hedging is a state's hedging behavior that manifests through multilateral institutions; it is a type of hedging effort performed by states that use multilateral institutions, instead of traditional military measures, to compete for power and influence in international politics. 40 In a world where increasing strategic uncertainty is being caused by the disrupted liberal international order, any state, including the great powers, must hedge risk and make various institutional choices, rather than focusing on a single choice. Such behaviors by the powers in Asia may cause the activation of regionalism under the unclear and uncertain situations caused by the decline of the international liberal order.
Second, an awareness of the behaviors and reactions of middle and small powers is indispensable to understanding the complexity involved in the development of regionalism in Asia. 41 Such an awareness can be created by examining the process of creation of the ASEAN-led regional architecture in the 1990s and 2000s and evaluating the ASEAN's reactions toward the efforts made by world powers, such as China, the United States, and Japan, to determine the trajectory of development of regionalism and the form of the regional order. The reason is that the power of any great power to determine the nature of international situations and international order will be limited in a multipolar world. In such a world, the choices of small powers play a critical role in determining the characteristics of an international and regional order.
Prospects
Langenhove's original argument on multilateralism 2.0, which explains how a new type of multilateralism emerged in the early 2010s, put across the premise that the world order was shifting from a United States-centered unipolarity to a network-formed multipolarity. 42 The argument focused on Europe; hence, we should be careful about adapting it to Asian experiences. However, even though the rise of China is remarkable and U.S. hegemonic power continues to exist, the regional circumstances in Asia are certainly shifting toward a multipolar structure. This paper highlighted that regionalism and regional institutions in Asia are currently developing as a result of the shift from unipolarity to multipolarity caused by the weakening of the United States-led liberal international order, which had previously provided the foundation for the development of Asian regionalism. While facing such high-level strategic uncertainty in the international arena, both great powers and middle and small powers started pursuing their own institutional hedging strategies, and their efforts caused the activation of regionalism, as well as the development of a complex and multilayered regional institutional structure. 40 About the definition of "institutional balancing", see He, "Contested multilateralism 2.0," 211. 41 For the important role of middle and small powers in regionalism in Asia, see Flemes, ed., Regional Leadership. 42 Van Langenhove, "Transformation of Multilateralism," 263.
Many important topics are outside the scope of this paper. Among them, the most important is clarifying the role of non-state actors in the development of regionalism in Asia. Langenhove pointed out the growing importance of non-state actors, such as supranational regional organizations and sub-state regions, in the international arena and the growing space for citizen involvement. In the Asian context, the behaviors of non-state actors, such as regional institutions, global enterprises, local companies, local governments, nongovernmental organizations, civil movement groups, interest groups, and citizens and individuals, significantly influence the negotiations on regional multilateralism among state actors. Today, ordinary people in all countries in Asia are increasingly expressing their opinions on topics ranging from an increase in the participation of civilians in politics to the emergence of extreme nationalism and religious identity politics. Economic interdependence, or regionalization, in Asia has been driven by private enterprises, and it has affected the trajectory of development of regionalism. The importance of non-state actors in promoting Asian regionalism is increasing, although nation-states continue to be the prominent actors in international politics. | 2019-11-22T01:04:49.800Z | 2019-07-03T00:00:00.000 | {
"year": 2019,
"sha1": "c577d2baf14cd5eb8ea462b91115a7d048741327",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/24761028.2019.1688905?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "db5a01596040f797dfb409d1ad7dacbf03add67d",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
12293675 | pes2o/s2orc | v3-fos-license | ARCHITECTURE COMPUTERS
Large-scale simulations of the nonlinear Schrödinger equation (NLSE) are required in the solution of many problems in fiber optics—among them accurate modeling of wavelength division multiplexed transmission systems. The split-step Fourier (SSF) method, commonly used in the numerical solution of the NLSE, often proves too slow in serial versions, even on the fastest workstations. In this article, we present a parallelization of the SSF algorithm that is appropriate for multiprocessors. Most commercial multiprocessors support both shared-memory and distributed-memory programming paradigms. As described here, we have implemented the SSF method under both of these parallel programming paradigms on a four-processor Silicon Graphics/Cray Research Origin 200 multiprocessor computer. The NLSE,
Introduction
The Nonlinear Schrödinger equation (NLSE), iA_t + σ d A_xx + |A|²A = G, is a nonlinear partial differential equation that describes wave packet propagation in a medium with cubic nonlinearity. Technologically, the most important application of the NLSE is in the field of nonlinear fiber optics [1,2]. The parameter σ specifies the fiber anomalous group velocity dispersion (σ = 1) or normal group velocity dispersion (σ = −1), while the parameter d defines the normalized absolute value of the fiber's dispersion. The perturbation G is specified by details of the physical fiber being studied.
In the special case G = 0, the NLSE is integrable [3] and can be solved analytically. In general, if G ≠ 0, the NLSE must be solved numerically. One of the most popular numerical methods to solve the perturbed NLSE is the split-step Fourier method (SSF) [2]. For small-scale calculations, serial implementations of SSF are adequate; however, as one includes more physics in the simulation, the need for large numbers of Fourier modes to accurately solve the NLSE demands parallel implementations of SSF.
Many fiber optics problems demand large-scale numerical simulations based on the SSF method. One class of such problems involves accurate modeling of wave-length division multiplexed (WDM) transmission systems where many optical channels operate at their own frequencies and share the same optical fiber. WDM is technologically important as it is one of the most effective ways to increase the transmission capacity of optical lines [1,2,4]. To accurately model WDM one needs to include large numbers of Fourier harmonics in the numerical simulation to cover the entire transmission frequency band. Moreover, in WDM systems different channel pulses propagate at different velocities and, as a result, collide with each other. At the pulse collision, Stokes and anti-Stokes sidebands are generated; these high frequency perturbations lead to signal deterioration [5,6]. Another fundamental nonlinear effect called four-wave mixing (FWM) [7] must be accurately simulated as the FWM components broaden the frequency domain which requires even larger numbers of Fourier modes for accurate numerical simulation.
To suppress the FWM [5,6] and make possible the practical realization of WDM, one can use dispersion management (concatenation of fiber links with variable dispersion characteristics). The dispersion coefficient d in NLSE is now no longer constant but represents a rapidly varying piecewise constant function of the distance down the fiber. As a result, one must take a small step size along the fiber to resolve dispersion variations and the corresponding pulse dynamics. A final reason to include a large number of Fourier modes in numerical simulations is to model the propagation of pseudorandom data streams over large distances.
All of the above factors make simulation of the NLSE quite CPU intensive. Serial versions of the split-step Fourier method in the above cases may be too slow even on the fastest modern workstations. To address the issue of accurately simulating physical telecommunication fibers in a reasonable amount of time, we discuss the parallelization of the SSF algorithm for solving the NLSE. Our parallel SSF algorithm is broadly applicable to many systems and not limited to the solution of the NLSE. We consider an algorithm appropriate for multiprocessor workstations. Parallel computing on multiprocessor systems raises complex issues including solving problems efficiently with small numbers of processors, limitations due to the increasingly complex memory hierarchy, and the communication characteristics of shared and distributed multiprocessor systems. Current multiprocessors have evolved towards a generic parallel machine, which shares characteristics of both shared and distributed memory computers. Therefore most commercial multiprocessors support both shared-memory and distributed-memory programming paradigms. The shared-memory paradigm consists of all processors being able to access some amount of shared data during the program execution. This addressing of memory on different nodes in shared-memory multiprocessors causes complications in writing efficient code. Some of the most destructive complications are: cache hierarchy inefficiency (alignment and data locality), false sharing of data contained in a cache block, and cache thrashing due to true sharing of data. Most vendors provide compiler directives to share data and divide up computation (typically in the form of loop parallelism) which, in conjunction with synchronization directives, can be used to speed up many sequential codes. In distributed-memory programming, each processor works on a piece of the computation independently and must communicate the results of the computation to the other processors. This communication must be written explicitly into the parallel code, thus requiring more costly development and debugging time. The communication is typically handled by libraries such as the message passing interface (MPI) [8], which communicates data through Ethernet channels or through the existing memory system. Our primary goal is to present a parallel split-step Fourier algorithm and implement it under these two different parallel programming paradigms on a 4-processor Silicon Graphics/Cray Research Origin 200 multiprocessor computer.
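To make the directive-based shared-memory style concrete, the following is a minimal sketch (not taken from the paper): the pointwise nonlinear phase rotation used later in the SSF method is divided among threads with an OpenMP parallel-for directive. The array name, step size, and the phase factor exp(iτ|A|²) are illustrative assumptions rather than the paper's definitions.

```c
/* Minimal sketch (not from the paper): loop parallelism via an OpenMP
 * compiler directive.  Each iteration touches only a[l], so the loop
 * is embarrassingly parallel and the directive simply divides the
 * index range among threads.  Compile with, e.g., cc -fopenmp ...    */
#include <complex.h>
#include <stdlib.h>

void nonlinear_step(double complex *a, int n, double tau)
{
    #pragma omp parallel for schedule(static)
    for (int l = 0; l < n; l++) {
        /* |A|^2 for this mesh point, then the assumed phase rotation. */
        double intensity = creal(a[l]) * creal(a[l]) + cimag(a[l]) * cimag(a[l]);
        a[l] *= cexp(I * tau * intensity);
    }
}

int main(void)
{
    int n = 1 << 16;
    double complex *a = malloc(n * sizeof *a);
    for (int l = 0; l < n; l++) a[l] = 1.0;   /* dummy field */
    nonlinear_step(a, n, 1e-3);
    free(a);
    return 0;
}
```

If the code is compiled without OpenMP support the directive is simply ignored and the loop runs serially, which mirrors the point above that shared-memory parallelism can often be layered onto sequential code through directives alone.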
The remainder of this paper is organized as follows. In Section 2, we recall a few basics of the split-step Fourier method. In Section 3, we introduce the parallel algorithm for SSF. Timing results and conclusions are given in Sections 4 and 5, respectively.
Split-Step Fourier Method
The Split-Step Fourier (SSF) method is commonly used to integrate many types of nonlinear partial differential equations. In simulating Nonlinear Schrödinger systems (NLS) SSF is predominantly used, rather than finite differences, as SSF is often more efficient [9]. We remind the reader of the general structure of the numerical algorithm [2].
NLS can be written in the form ∂A/∂t = (L + N)A, where L and N are the linear and nonlinear parts of the equation. The solution over a short time interval τ can be written in the form A(t + τ, x) ≈ exp(τ L) exp(τ N) A(t, x), where the linear operator in NLS acting on a spatial field B(t, x) is written in Fourier space as exp(τ L) B(t, x) = F^{−1}[ exp(τ L(k)) F B(t, x) ], where F denotes the Fourier transform (FT), F^{−1} denotes the inverse Fourier transform, and k is the spatial frequency.
We split the computation of A over the time interval τ into 4 steps:
Step 1. Nonlinear step: compute A_1 = exp(τ N) A(t, x) (by finite differences);
Step 2. Forward FT: perform the forward FFT on A_1, A_2 = F A_1;
Step 3. Linear step: compute A_3 = exp(τ L) A_2;
Step 4. Backward FT: perform the backward FFT on A_3, A(t + τ, x) = F^{−1} A_3.
To discretize the numerical approximation of the above algorithm, the potential A is discretized in the form A_l = A(lh), l = 0, . . . , N − 1, where h is the space-step and N is the total number of spatial mesh points.
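As an illustration of these four steps, the following is a minimal NumPy sketch of one split step. It assumes a cubic nonlinearity N(A) = iγ|A|²A (for which the nonlinear step can be taken exactly rather than by finite differences, as in the text) and a user-supplied Fourier symbol L(k) for the linear part; all names and the example dispersion symbol are illustrative, not taken from the paper.

import numpy as np

def ssf_step(A, tau, h, linear_symbol, gamma=1.0):
    # One split step for dA/dt = (L + N)A on a periodic grid of N points, A[l] = A(l*h).
    N = A.size
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=h)          # spatial frequencies
    A1 = A * np.exp(1j * gamma * np.abs(A)**2 * tau)  # Step 1: nonlinear step, A1 = exp(tau*N) A
    A2 = np.fft.fft(A1)                               # Step 2: forward FFT
    A3 = A2 * np.exp(tau * linear_symbol(k))          # Step 3: linear step, A3 = exp(tau*L) A2
    return np.fft.ifft(A3)                            # Step 4: backward FFT

# Example use with an assumed Schroedinger-type dispersion symbol L(k) = -0.5j*k**2.
x = np.linspace(-10.0, 10.0, 256, endpoint=False)
A = np.exp(-x**2).astype(complex)
A_next = ssf_step(A, tau=1e-3, h=x[1] - x[0], linear_symbol=lambda k: -0.5j * k**2)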
The above algorithm of the Split-Step Fourier (SSF) method is the same for both sequential and parallel code. Parallel implementation of this algorithm involves parallelizing each of the above four steps. Steps 1 and 3 parallelize naturally through the independent computation over subarrays of spatial elements of A(l). Therefore P processors will each work on sub-arrays of the field A, e.g., the first processor updates A_0 to A_{N/P−1}, the second processor updates A_{N/P} to A_{2(N/P)−1}, etc.
In the 1D-FFT, elements of (F A) k can not be computed in a straightforward parallel manner, because all elements A l are used to construct any element of (F A) k . The problem of 1D FFT parallelization has been of great interest for vector [10,11] and distributed memory computers [12]. These algorithms are highly architecture dependent, involving efficient methods to do the data re-arrangement and transposition phases of the 1D FFT. Communication issues are paramount in 1D FFT parallelization and in the past have exploited classic butterfly communication patterns to lessen communication costs [12]. However, due to a rapid change in parallel architectures, towards multiprocessor systems with highly complex memory hierarchies and communication characteristics, these algorithms are not directly applicable to many current multiprocessor systems. Shared memory multiprocessors often have efficient communication speeds, and we therefore implement the parallel 1D FFT by writing A l as a two dimensional array, in which we can identify independent serial 1D FFTs of rows and columns of this matrix. The rows and columns of the matrix A can be distributed to divide up the computation among several processors. Due to efficient communication speeds, independent computation stages, and the lack of the transposition stage of the 1D FFT in SSF computations, we show that this method exploits enough independent computation to result in a significant speedup using a small number of processors.
Algorithm of Parallel SSF
The difficulty in parallelizing the split-step Fourier algorithm is in steps 2 and 4, as the other two steps can be parallelized trivially due to the natural independence of the data A and A_2. In Step 2 and Step 4 there are non-trivial data dependences over the entire range 0 ≤ l ≤ N − 1 of A_1(l) and A_3(l), which involve forward and backward Fourier Transforms (FFT and BFT). The discrete forward Fast Fourier Transform (FFT) is of the form (F A)_k = Σ_{l=0}^{N−1} A_l exp(−2πi lk/N), which requires all elements of A(l). Several researchers have suggested parallel 1D Fast Fourier Transform algorithms [10,11,12], but to date there exist no vendor-optimized parallel 1D FFT algorithms. Therefore implementations of these algorithms are highly architecture dependent. Parallel 1D FFT algorithms must deal with serious memory hierarchy and communication issues in order to achieve good speedup. This may be the reason why vendors have been slow to support the computational community with fast parallel 1D FFT algorithms. However, we show below that we can get significant parallel speedup due to the elimination of the transposition stage in 1D FFT for SSF methods and due to exploitation of independent computation by performing many sequential 1D FFTs on small subarrays of A(l).
Our method of parallelizing the SSF algorithm requires dividing the 1D array A(l) into subarrays which are processed using vendor optimized sequential 1D FFT routines. We assume the dimension N of the 1D array A is the product of two integer numbers: N = M_0 × M_1. Therefore A can be written as a 2D matrix of size M_0 × M_1. Writing the element index as l = n_0 + M_0 n_1 and the frequency index as k = k_1 M_1 + k_0 (with 0 ≤ n_0, k_1 < M_0 and 0 ≤ n_1, k_0 < M_1), we can reduce the expression for the Fourier transform of the array A to the form
(F A)(k_1 M_1 + k_0) = Σ_{n_0=0}^{M_0−1} [ f(k_0, n_0) exp(−2πi n_0 k_0 / N) ] exp(−2πi n_0 k_1 / M_0), where f(k_0, n_0) = Σ_{n_1=0}^{M_1−1} A(n_0 + M_0 n_1) exp(−2πi n_1 k_0 / M_1). (2)
The reduced expression Eq. (2) demonstrates that the N = M_0 · M_1 Fourier transform F is obtained by making M_0-size Fourier transforms of f(k_0, n_0) exp(−2πi n_0 k_0 / N) for fixed k_0. Therefore the 1D array A is written as a 2D matrix a_jk of size M_0 × M_1 with elements (A(0), ..., A(M_0 − 1)) in the first column, (A(M_0), ..., A(2M_0 − 1)) in the second column, etc. We use this matrix a_jk in our parallel FFT algorithm:
Step 1. Independent M_1-size FFTs on rows of a_jk.
Step 2. Multiply elements a(j, k) by a factor E_jk = exp(−(2πi/N) · j · k).
Step 3. Independent M_0-size FFTs on columns of a_jk.
The result of Steps 1-3 is the N = M_0 · M_1 1D Fourier transform of A stored in rows: (F(0), ..., F(M_1 − 1)) in the first row, (F(M_1), ..., F(2M_1 − 1)) in the second row, and so on. To regain the proper ordering of A (how elements were originally stored in matrix a_jk) requires a transposition of the matrix, which is the last step in a parallel FFT algorithm.
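The decomposition can be checked directly in a few lines of NumPy. The sketch below assumes the column-major packing described above (a[j, c] = A(j + c·M_0)); the final assertion confirms that row FFTs, the twiddle-factor multiplication, and column FFTs reproduce the full N-point FFT, with the result stored row by row.

import numpy as np

def fft_by_rows_and_columns(A, M0, M1):
    # N = M0*M1 point FFT via M1-size row FFTs, twiddle factors, and M0-size column FFTs.
    N = M0 * M1
    a = A.reshape(M1, M0).T.copy()                 # a[j, c] = A(j + c*M0)
    a = np.fft.fft(a, axis=1)                      # Step 1: M1-size FFTs on rows
    j = np.arange(M0)[:, None]
    k = np.arange(M1)[None, :]
    a = a * np.exp(-2j * np.pi * j * k / N)        # Step 2: multiply by E_jk
    return np.fft.fft(a, axis=0)                   # Step 3: M0-size FFTs on columns

M0, M1 = 8, 16
rng = np.random.default_rng(0)
A = rng.standard_normal(M0 * M1) + 1j * rng.standard_normal(M0 * M1)
F = fft_by_rows_and_columns(A, M0, M1)
# Row q holds (F(q*M1), ..., F(q*M1 + M1 - 1)), so reading the rows in order recovers the 1D FFT.
assert np.allclose(F.reshape(-1), np.fft.fft(A))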
In the SSF method, the transposition is not necessary as we apply a linear operator L(k) and then take the steps: Step 1 -Step 3 in reverse order. This avoids the transposition because one can define a transposed linear operator array and multiply a jk by this transposed linear operator. Then Step 1 -Step 3 are performed in reverse order with the conjugate of the exponential term in Step 2.
The complete SSF parallel algorithm consists of the following steps:
Step 1. Nonlinear step.
Step 2. Row-FFT.
Step 3. Multiply by E.
Step 4. Column-FFT.
Step 5. Linear step (transposed linear operator).
Step 6. Column-BFT.
Step 7. Multiply by E* (the complex conjugate of E).
Step 8. Row-BFT.
The parallelization is due to the natural independence of operations in steps 1, 3, 5, and 7 and by the row and column subarray FFTs in steps 2, 4, 6, and 8. The row and column subarray FFTs of size M_1 and M_0 are performed independently with serial optimized 1D FFT routines. Working with subarray data, many processors can be used to divide up the computational work, resulting in significant speedup if communication between processors is efficient. Further, smaller subarrays allow for better data locality in the primary and secondary caches. The implementation details of the shared-memory and the distributed-memory parallel SSF algorithms outlined above depend on writing Steps 1-8 using either shared memory directives or distributed memory communication library calls (MPI).
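The following sketch strings the eight steps together for a single SSF step on the M_0 × M_1 matrix, again assuming the packing above, a cubic nonlinearity, and an illustrative dispersion symbol. The linear operator is stored as a "transposed" array indexed by the mode number q·M_1 + r held at entry (q, r), so no transposition of the matrix is ever performed; in a parallel implementation the row and column FFTs in steps 2, 4, 6 and 8 would each be distributed over the processors.

import numpy as np

def parallel_ssf_step(a, tau, h, linear_symbol, gamma=1.0):
    # One transposition-free SSF step on the M0 x M1 matrix a, where a[j, c] = A(j + c*M0).
    M0, M1 = a.shape
    N = M0 * M1
    j = np.arange(M0)[:, None]
    r = np.arange(M1)[None, :]
    E = np.exp(-2j * np.pi * j * r / N)                  # twiddle factors E_jk
    modes = np.arange(M0)[:, None] * M1 + np.arange(M1)[None, :]
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=h)[modes]      # frequency of the mode held at entry (q, r)
    L = np.exp(tau * linear_symbol(k))                   # "transposed" linear operator array

    a = a * np.exp(1j * gamma * np.abs(a)**2 * tau)      # 1. nonlinear step
    a = np.fft.fft(a, axis=1)                            # 2. row-FFT
    a = a * E                                            # 3. multiply by E
    a = np.fft.fft(a, axis=0)                            # 4. column-FFT
    a = a * L                                            # 5. linear step
    a = np.fft.ifft(a, axis=0)                           # 6. column-BFT
    a = a * np.conj(E)                                   # 7. multiply by E*
    return np.fft.ifft(a, axis=1)                        # 8. row-BFT

Applying this routine to the packed field and the plain ssf_step above to the flat array should give the same result up to rounding, since the row/column transform pair is exactly the full N-point FFT.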
Shared Memory Approach
Much of the SSF parallel algorithm outlined above can be implemented with "$doacross" directives to distribute independent loop iterations over many processors. The FFTs of size M 0 and M 1 are implemented by distributing the 1D subarray FFTs of rows and columns over the P available processors. The performance can be improved drastically by keeping the same rows and columns local in a processor's secondary cache to alleviate true sharing of data from dynamic assignments of sub-array FFTs by the "$doacross" directive. The subarray FFTs are performed using vendor optimized sequential 1D FFT routines which are designed specifically for the architecture.
It is efficient to perform all column operations (Steps 3 -7) in one pass: copying a column into local sub-array S contained in the processor's cache and in order, multiply by the exponents in Step 3, perform the M 0 -size FFT of S, multiply by the transposed linear operator exp(τ L), invert the M 0 -size FFT, multiply by the conjugate exponents, and finally store S back into the same column of a. This allows for efficient use of the cache, reducing false/true sharing as we perform many operations on each subarray.
Distributed Memory Approach
The Message Passing Interface (MPI) is a tool for distributed parallel computing which has become a standard used on platforms ranging from high-end parallel computers to weakly coupled distributed networks of workstations (NOW) [8]. In distributed parallel programming, different processors work on completely independent data and explicitly use send and receive library calls to communicate data between processors.
To implement the distributed parallel SSF algorithm for the Nonlinear Schrödinger system (NLS), one needs to distribute the rows of array A among all P available processors. Then Steps 1-3 can be executed without communication between processors. After these steps, it is necessary to endure the communication cost of redistributing the elements of A among the P processors. Each processor must send a fraction 1/P of its data to each of the other processors. Then each processor will have the correct data for Steps 4-7, and column operations are performed independently on all P processors. Finally, there is a second redistribution prior to Step 8. To make T steps of the SSF algorithm, we use the following scheme:

subroutine Distributed SSF
  distribute rows among processors
  Step 1. nonlinear step
  Step 2. row-FFT
  Step 3. multiply by a factor E
  for i = 1 to T − 1 do
    data redistribution
    Step 4. column-FFT
    Step 5. linear step
    Step 6. column-BFT
    Step 7. multiply by a factor E*
    data redistribution
    Step 8. row-BFT
    Step 1. nonlinear step
    Step 2. row-FFT
    Step 3. multiply by a factor E
  end do
  data redistribution
  Step 4. column-FFT
  Step 5. linear step
  Step 6. column-BFT
  Step 7. multiply by a factor E*
  Step 8. row-BFT
end

The large performance cost in this algorithm is the redistribution of data between row and column operations. If the row and column computational stages result in significant speedup compared to the communication expense of redistributing the matrix data, then this algorithm will be successful. This depends crucially on fast communication between processors, which is usually the case for shared memory multiprocessors and less so for NOW computers.
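Under MPI the only non-trivial piece is the redistribution between the row and column stages. The sketch below uses mpi4py (assumed available) and illustrates the communication pattern only: each rank owns a block of rows, slices it into P pieces, and an all-to-all exchange leaves each rank with a block of columns ready for the column FFTs. Array sizes and names are illustrative.

import numpy as np
from mpi4py import MPI   # assumed available; run e.g. with "mpirun -np 4 python script.py"

comm = MPI.COMM_WORLD
P, rank = comm.Get_size(), comm.Get_rank()
M0, M1 = 8 * P, 16 * P                       # assume both dimensions are divisible by P

# Each rank owns M0/P consecutive rows of the M0 x M1 matrix a.
local_rows = np.random.rand(M0 // P, M1) + 0j

# Cut the local row block into P column slices and exchange them with every rank;
# each slice is a 1/P fraction of the local data, as described in the text.
send = [np.ascontiguousarray(local_rows[:, p * (M1 // P):(p + 1) * (M1 // P)]) for p in range(P)]
recv = comm.alltoall(send)

# Stack the received pieces: this rank now owns all M0 rows of its slice of columns.
local_cols = np.vstack(recv)
# ... perform the column-FFT / linear step / column-BFT stages on local_cols, then redistribute back.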
Results
We performed timings of the parallel SSF algorithm on the Silicon Graphics/Cray Research Origin 200 multiprocessor. The Origin 200 was used because it allows for both shared and distributed memory parallel programming and models a generic multiprocessor. The Origin 200 is efficient at fine-grained parallelism, which typically makes shared memory programming both efficient and easy. The Origin 200 workstation used in this study consisted of four MIPS R10000 64-bit processors (chip revision 2.6) with MIPS R10010 (revision 0.0) floating point units running at 180 MHz. The primary cache consisted of a 32KB 2-way set-associative instruction cache and a 32KB 2-way set-associative data cache. Each processor also had a 1MB 2-way set-associative secondary cache. The machine had a sustained 1.26GB/sec memory bandwidth and 256MB of RAM.
The operating system was IRIX 6.4. We used a Mongoose f77 version 7.10 Fortran compiler. For the parallel programming we used MPI version 1.1 for the distributed computing and the native "$doacross" and synchronization directives provided by the f77 compiler for shared memory programming. All timings are of the total wall-clock time for the code to both initialize and execute.
Timings
For the following timings, we use M_0 = M_1 = 2^K, so that the entire 1D array is of size N = 2^{2K}. The one-processor implementation of parallel SSF was 10% to 20% faster than serial SSF code using vendor optimized 1D FFTs of the entire array of size N = 2^{2K}. This improvement is due to better cache coherence using smaller subarrays, as an entire subarray can be contained in the L1 cache, and is due to the fact that the single-processor parallel SSF does not do the transposition stage of the 1D FFT. All timings are compared to the one-processor parallel code at the same optimization level (compared to sequential SSF the below speedups are even more impressive). For shared memory parallel implementations, we find over the range of 2^{12} < N < 2^{18} that two-node SSF implementations have good speedup (SU) with a maximum speedup at N = 2^{16}. Using four nodes, for small array sizes we have 1/4 less work per processor, but more contention due to the sharing of pieces of data contained in the secondary caches of four different processors. At N = 2^{16}, we again see the maximum speedup (now for 4 nodes), reflecting that the ratio of computational speed gain to communication contention is optimal at this problem size. Under the shared memory programming model, subarrays are continually distributed among processors to divide up the computational work. Data in a single subarray may be contained on one or more processors, requiring constant communication. The data contained in each processor's L2 cache is of size O(N/P), where P is the number of processors. Contention in the memory system is modeled as being proportional to O((N/P)^2), which reflects the communication congestion for sharing data of large working sets. Further, unlike the serial code, the parallel code endures a communication time to send data between processors proportional to O(N/P)τ_c, where τ_c is the time to transfer an L2 cache block between processors. Finally, the time to perform the 1D FFT is approximately N log(N) τ_M, where τ_M is the time to perform a floating point operation. A simple formula for the speedup (SU) of the shared memory FFT is SU = N log(N) τ_M / [ (N log(N)/P) τ_M + (N/P) τ_c + f (N/P)^2 ], where f is a small number reflecting contention in the communication system. If N = 2^K we can simplify the above expression to SU ≈ P / ( 1 + ξ/K + f 2^K/(PK) ), where the constants are absorbed into f and ξ. With f = 0 (no contention) one predicts for fixed P that the speedup increases for larger and larger problem size N. However, for f ≠ 0 the speedup eventually decreases with larger N due to contention of communicating small pieces of subarray data between arbitrary processors. This equation reflects the trend seen in our empirical data of speedup for shared memory SSF, where speedup attains a maximum with problem size at N = 2^{16}. The above SU formula must be reinterpreted for distributed SSF due to the implicit independent computational stages where no data is communicated between processors, unlike shared memory SSF. Distributed SSF uses communication stages to send data between processors and does not involve contention due to sharing data between P processors during computation stages. Distributed MPI timings are compared to a one-processor MPI code at the same optimization level. The MPI one-processor code was faster than the one-processor shared memory code, as it did not have synchronization steps. The parallel timings were typically faster than the shared memory parallel code, except for the N = 2^{16} array size for which the shared memory code did slightly better.
We find that for distributed memory parallel implementations of SSF over the range of 2 12 < N < 2 18 two-node implementations have good speedup with maximum speedup at N = 2 14 , beyond which the communication cost increases and the computation/communication ratio decreases for larger problem size. The communication cost is different in the MPI case than for shared memory, as data is communicated in "communication stages" so less than perfect speedup (SU) is due to the volume of data communicated between processors in redistribution stages. Using four nodes, we find that the speedup increases with the working set N. This is due to both making the computation stages faster O(NLog(N)/8) and by communicating only O(N/16) of data between single processors in the redistribution stage. For small problem size there is not enough work to make dividing the problem among 4 processors beneficial. The speedup in the distributed SSF algorithm is attributed to the independence of data contained in a processor's local cache between data rearrangement stages, which is not true for the dynamic assignment and sharing of subarray data throughout computational stages in shared memory SSF implementations. These results are encouraging in that the speedup in multiprocessor SSF implementations is considerable. Speedup over sequential code using vendor optimized full array 1D FFT is even greater. We recommend implementing the parallel SSF algorithm even on sequential machines due to the 10% to 20% speedup over optimized 1D sequential SSF algorithms. This reflects a better use of the L1 cache and data locality by using small subarrays and removing the transposition stage of the 1D FFT in SSF. For shared memory implementations of parallel SSF, the maximum speedup requires balancing contention in the communicating data contained over more than one processor to the computation performance gain of using small subarrays. For the distributed parallel SSF there is more data locality as data is distributed statically prior to the computational stages. This division of computation and communication stages is different than for shared memory SSF which dynamically distributes subarray FFTs and shares data on more than one processor. Distributed SSF speedup is a function of the number of processors P which reduces both the computational time and communication volume between single processors. The speedup of the parallel SSF is strongly dependent on reducing communication time and contention in the multiprocessor.
Conclusions
Multiprocessor systems occupy the middle ground in computing between sequential and massively parallel computation. In multiprocessor computing, one wants to write code to take advantage of between 2 and 16 processors to get good speedups over sequential code. Our parallel SSF method is designed for small numbers of tightly integrated processors to divide the 1D FFT into many subarray FFTs performed on P processors. The speedup depends on optimizing the computational speed gain to communication cost in order to speedup traditionally sequential numerical code. The shared memory parallel SSF algorithm does not scale with problem size N as subarray data is distributed over more than one processor causing increases in contention due to gathering large amounts of subarray data from many processors. The distributed memory parallel SSF algorithm uses independent data during computational stages and then uses expensive data redistribution stages. The communication cost of the data redistribution stages can be reduced by using more processors, which also decreases the time for the computation stage. Our results suggest that nearly perfect speedup can be achieved over sequential SSF algorithms by tuning the number of processors and problem size. The significant speedup over sequential code is broadly applicable to many sorts of code which depend crucially on speeding up the sequential 1D FFT and should be explored for other numerical algorithms. | 2014-10-01T00:00:00.000Z | 1999-01-01T00:00:00.000 | {
"year": 1999,
"sha1": "c6d6efaa73ba73a43c72565e50b1cabdbc1f48f5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9c80f38a8c7e2a6993eef912535e746599a76776",
"s2fieldsofstudy": [
"Computer Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
268892045 | pes2o/s2orc | v3-fos-license | Renin–angiotensin system: Basic and clinical aspects—A general perspective
The renin–angiotensin system (RAS) is one of the most complex hormonal regulatory systems, involving several organs that interact to regulate multiple body functions. The study of this system initially focused on investigating its role in the regulation of both cardiovascular function and related pathologies. From this approach, pharmacological strategies were developed for the treatment of cardiovascular diseases. However, new findings in recent decades have suggested that the RAS is much more complex and comprises two subsystems, the classic RAS and an alternative RAS, with antagonistic effects that are usually in equilibrium. The classic system is involved in pathologies where inflammatory, hypertrophic and fibrotic phenomena are common and is related to the development of chronic diseases that affect various body systems. This understanding has been reinforced by the evidence that local renin–angiotensin systems exist in many tissue types and by the role of the RAS in the spread and severity of COVID-19 infection, where it was discovered that viral entry into cells of the respiratory system is accomplished through binding to angiotensin-converting enzyme 2, which is present in the alveolar epithelium and is overexpressed in patients with chronic cardiometabolic diseases. In this narrative review, preclinical and clinical aspects of the RAS are presented and topics for future research are discussed, including some aspects that should be clarified in the future and that call for further investigation of this system.
Introduction
Renin was first identified more than a century ago, and since then the renin---angiotensin system (RAS) has been studied for its role in regulating blood pressure. Various elements and processes that are key in both cellular and systemic physiology have been incorporated, which has led to the suggestion that the RAS plays a role in pathologies that affect multiple organs and systems. The current work provides an overview of the RAS, covering preclinical and clinical aspects and raising some perspectives regarding the pathologies in which the RAS is involved. For this purpose, a review was carried out in the PubMed, Google Scholar, and Scopus databases using combinations of the keywords renin, angiotensin, ACE, angiotensin receptors, renin---angiotensin system, physiopathology and pharmacology.
The renin---angiotensin system
For many years, the study of the renin---angiotensin system focused on the kidney---heart axis. The kidney and heart form an axis whose function is to ensure tissue perfusion and body homeostasis; for this, the cardiovascular system provides blood circulation and the kidney maintains the blood volume and the amount of water in the body. 1 This cardiorenal interaction requires very fine regulatory mechanisms, one of which is the RAS, which involves a cascade of enzymatic reactions that result in a final effector, angiotensin II. This molecule works by modulating vascular tone and body water volume. However, accumulating evidence has identified the presence of an RAS in different tissues, including muscle, nervous system, bone, gonadal, gastrointestinal, immune, adipose, pancreatic, liver and circulatory tissue. 2 The RAS includes renin, which is produced by the juxtaglomerular apparatus in the kidney, and a liver protein called angiotensinogen, which is a target of enzymatic reactions that results in angiotensin II. Angiotensin II acts at several levels to ensure the regulation of cardiovascular and renal function and body homeostasis.
Renin
Renin (R) is an acid protease synthesised and stored by secretory vesicles in the juxtaglomerular cells of the dense renal macula. It is initially synthesised as a proenzyme, prorenin (PR), a 406 amino acid (aa) protein, which is cleaved and released into circulation as active renin (340 aa). A small proportion is released as prorenin; the renin/prorenin ratio in the plasma of healthy people is 10. This R/PR ratio is similar in individuals with primary high blood pressure (HBP) but increases to 40---50 in individuals with type 2 diabetes mellitus (DM2). The action of plasma renin is known as plasma renin activity and is measured through the concentration of angiotensin I that appears in plasma per unit of time (ng/ml/h). Stimuli for renin secretion include decreased blood pressure, sympathetic stimulus and humoral factors, such as angiotensin II, endothelins, prostaglandins, histamine, vasopressin, nitric oxide and dopamine. 3---5 Angiotensin Angiotensin II (AngII) originates from preangiotensinogen, a 485 aa liver-synthesised precursor protein, which by enzymatic action is transformed into angiotensinogen, a 452 aa protein that is then secreted into the circulation. 6 Glucocorticoids, oestrogens, thyroid hormones, cytokines and AngII stimulate the secretion of angiotensinogen. Once in circulation, angiotensinogen interacts with R to generate the peptide angiotensin I (AngI), a 10 aa peptide. The interaction of AngI with angiotensin-converting enzyme (ACE) gives rise to AngII, an active 8 aa peptide that targets angiotensin receptors 1 and 2 (AT1 and AT2). AngII has a half-life of one to two minutes and is degraded by peptides. The effect of AngII is derived from its action mainly on AT1 receptors present in vascular smooth muscle and other tissues. Effects include vasoconstriction, inflammation, fibrosis and cell proliferation. 2,7,8 However, other metabolites produced by the action of ACE2 and other peptidases have also been described (Table 1). ACE2 promotes the conversion of AngI to angiotensin 1-9 (Ang1-9) and the conversion from AngII to angiotensin 1-7 (Ang1-7). Similarly, the conversion from Ang1-9 to Ang1-7 is carried out by ACE and neprilysin (membrane metalloendopeptidase, MME). ACE also converts Ang1-7 to Ang1-5. Many of these angiotensin molecules are expressed in parallel to AngII in the heart, kidney and testis, while there is lower expression in the colon, small intestine and ovaries. 2,9---13 Other angiotensin molecules include Ang2-10, derived from AngI and produced by the action of aminopeptidases. Ang2-10 gives rise to Ang2-8 (also called AngIII) and is produced through the action of ACE. Ang3-8 (or AngIV) is produced from AngIII through the action of aminopeptidase (Table 1). Another angiotensin is Ang3-7, which is derived from AngIV. AngII gives rise to AngA, which in turn gives rise to the metabolite alamandine. 9,14---17
Angiotensin-converting enzyme
In addition to renin, another key enzyme in the RAS is angiotensin-converting enzyme (ACE), a peptidase that transforms AngI into AngII. This enzyme has two subtypes, ACE and ACE2, which are expressed in different tissues. ACE converts AngI into AngII in the pulmonary tissue. Other aminopeptidases, such as cathepsins, also favour this process. 13 ACE also interacts with the bradykinin system to degrade bradykinin into inactive peptides.
ACE is a transmembrane protein. Two ACE subtypes have been identified----one expressed in somatic cells and one in germ cells. ACE is expressed in many epithelial cells, especially in lung tissue, renal epithelium cells, adrenal glands, the small intestine and the epididymis. It is coupled to the Gq/11 protein and activates phospholipase C (PLC) by raising the intracellular calcium level and activating protein kinase C (PKC). ACE inhibits adenylate cyclase (AC) and activates tyrosine kinase. 11 ACE2 is a membrane protein with a single transmembrane segment, an intracellular segment, and terminal N and C domains, but with a single enzymatic active site that gives it characteristics different from those of ACE. It is located in kidney, intestine, heart, testis and retina tissue. Rodent models that do not express ACE2 show corneal hyperkeratosis that reverses with losartan treatment. 18,19 Renin---angiotensin system receptors There are specific receptors for each of the RAS molecules (renin, angiotensin), These are described below.
Renin receptor
At the tissue level there is the renin/prorenin receptor, which is activated by renin and renal and extrarenal prorenin. The cleavage of prorenin generates active renin, which can then generate AngI. However, the binding of R and PR to the renin receptor activates intracellular MAPK signalling pathways related to the production of proinflammatory cytokines and cell differentiation processes. 3
Angiotensin receptors 1 and 2
The angiotensin receptors are transmembrane proteins with seven transmembrane domains. Two subtypes have been described: AT1 and AT2. AT1 is a G protein-coupled receptor that increases the intracellular calcium concentration.
There are two subtypes of AT1: subtype A, present in the brain, and subtype B, present in the adenohypophysis and cerebral cortex. The genes that encode for this receptor are located on chromosome 3. The AT1 receptor is the primary target of AngII, but AT1 also binds AngIII and Ang A. AT2 is a G protein-coupled receptor that activates various phosphatases, which eventually activate membrane potassium channels. Additionally, AT2 activation triggers the production of nitric oxide and GMPc. AT2 binds Ang1-7, Ang1-9, AngII, Ang A and AngIII. This receptor is present in foetal tissues, neonatal tissues and adult brain tissue. The gene encoding AT2 is located on the X chromosome. 12 Another related receptor is AT4, which binds AngIV and Ang3-7 (Table 1). In animal models, AT4 expression has been studied in the cerebral cortex, where it is involved in cognitive processes, but it has also been identified in other tissues including kidney, adrenal cortex, lung and heart. Functions include vasodilation and glucose uptake modulation. 14,20
MAS receptors
MAS receptors are transmembrane proteins encoded by the proto-oncogene MAS1 and are coupled to G proteins. This proto-oncogene induces tumours in animal models.
In the RAS, MAS receptor activation decreases the sympathetic tone, blood pressure, chronic hypertension, and fibrosis. By contrast, activation increases parasympathetic tone, baroreflex, vasodilation, nitric oxide production and natriuresis. Ang1-7 is the natural ligand for the MAS receptors. A subtype of the MAS receptor is the MrgD receptor, which shows affinity for alamandine. 10,15,16,21
Classic RAS vs alternative RAS
The RAS is an endocrine axis that includes elements of many origins such as renin, angiotensinogen, ACE, renin receptors and angiotensin (Fig. 1, Table 1). Renin, ACE, AngII and its AT1 receptor have been studied for decades. These molecules are involved in regulating cardiovascular function, and they participate in the development of some chronic pathologies of renal and heart origin, such as high blood pressure and hypertensive heart disease. Together, these molecules make up what is called the classic RAS system, also called the ACE/Ang II axis/AT1 receptor. 11,13,22 The general action of this subsystem has a vasopressor effect, with increased peripheral vascular resistance and water and sodium retention caused by stimulating the release of aldosterone into the adrenal cortex (Fig. 1, Table 1). The alternative RAS consists of some elements common to the classic RAS along with others that counterregulate the actions of the classic RAS (Fig. 1, Table 1). The common elements are renin, angiotensinogen, AngI and ACE. The action of angiotensin-converting enzyme 2 (ACE2) generates a series of molecules with individual activities (Ang1-9, Ang1-5 and Ang1-7); Ang1-7 constitutes the final molecule in this series and acts on the MAS receptor. For this reason, this alternative system is called the ACE2/ Ang1-7/MAS receptor axis. 7,9,12,15,21,23,24 Local renin---angiotensin systems In addition to the endocrine action of the RAS, several studies have shown paracrine and autocrine effects that depend on local actions in many organs and tissues, including heart, kidney, lung, muscle, central nervous system, blood vessel, pituitary gland, adrenal gland, liver, immune system, erythrocytes, digestive tract and adipose tissue. Similarly, an intracellular RAS is proposed, with AT1 receptors present in the mitochondrial membrane. The presence of a RAS in each tissue gives rise to their specific processes.
Adipose tissue
In adipose tissue, AngII increases inflammatory processes, increases the mass of adipose tissue, alters lipogenesis and lipolysis, decreases insulin sensitivity and increases glucose uptake. 25---27
Skeletal muscle
In skeletal muscle, AngII decreases blood flow, decreases insulin-stimulated glucose uptake and alters insulin signalling. In contrast, angiotensin 1-7 increases the insulinstimulated glucose uptake and improves average insulin signalling. 28
Pancreas
In the pancreas, AngII produces a reduction in blood flow, diminishes insulin secretion and increases oxidative stress, leading to increased inflammation. As a counterpart, angiotensin 1-7 decreases inflammation and apoptosis of the pancreatic islets. 29,30
Liver
In the liver, the RAS reduces insulin sensitivity and increases fibrosis. There is experimental evidence of the relationship between RAS and non-alcoholic fatty liver. The expression of AngII modifies the insulin receptor, inducing resistance. In addition, activated AT1 favours the production of reactive oxygen species (ROS) and pro-inflammatory cytokines and promotes fibroblast differentiation. 31 At the clinical level, retrospective studies have shown a possible positive association between age, diabetes and the likelihood of developing non-alcoholic fatty liver disease, while there is a negative relationship between treatment with statins and/or ARBs and the development of liver fibrosis. 32
Blood vessels
The action of the RAS in blood vessels has been studied experimentally in the retina, where ACE, renin and AT receptors are present. The activation of AT1 generates vasoconstriction of the retinal vessels and increases ROS, decreases endothelial nitric oxide (NO), and increases endothelial dysfunction. AT2 activation produces vasodilation and has anti-inflammatory effects. 33
Kidney
The presence of a local RAS has been found in the kidney. AT1 activation increases ROS production, hypertrophy and inflammation. Therefore, a relationship between intrarenal RAS, HBP and renal parenchyma damage has been suggested. 1,22,35
Lung
In animal models, a relationship has been identified between ACE, AT1 receptors and inflammation, vascular remodelling, and endothelial dysfunction. These damaging processes can lead to cardiopulmonary dysfunction. In these cases, pretreatment with captopril or losartan decreases the pro-inflammatory effect. 36,37 ACE2 is expressed in type II pneumocytes and has a protective anti-inflammatory role. 23,38
Bone marrow
There is experimental evidence for the presence of a RAS in bone marrow. AT1 receptors are expressed in haematopoietic cells, and their stimulation promotes differentiation of erythroid cells, a phenomenon that is blocked by losartan. In pathological conditions, high levels of ACE have been found in patients with leukaemia. 39,40
Nervous system
There is evidence for the expression of an RAS in neurons and glia due to the presence of ACE and the AT1, AT2 and MAS receptors. AT2 and MAS activation decreases the production of inflammatory factors and increases the levels of BDNF and proteins that facilitate phagocytosis. AT1 activation promotes inflammation, cell death, demyelination and alterations in cellular communication, which modifies plasticity and alters the development of cognitive processes. 41,43 Microglia activation induces cytokine elevation and neuroinflammation. Additionally, preclinical studies have shown a neuroprotective effect of AT2 agonists after a cerebrovascular accident, and it has been suggested that ACEIs or ARBs could help to minimise neuronal damage. 21,44 The endogenous agonists of these receptors and their origins remain unknown.
Cellular renin---angiotensin system
Several studies have highlighted the presence of AT1 in the mitochondrial membrane. The presence of this receptor can alter the function of the mitochondrial oxidative chain, which can lead to the production of free radicals and oxidative stress that in turn causes mitochondrial damage. Likewise, excess free radicals can lead to cell destruction. This oxidative phenomenon can be present in different tissues. In the nervous system, inflammatory phenomena in neurons and glia might explain the cell destruction present in some neurodegenerative diseases. Changes related to corneal inflammation resulting in hyperkeratosis and retinopathy have also been reported in preclinical trials. 33 This inflammatory phenomenon is common to many tissues and explains the presence of fibrotic processes in the cardiac, pulmonary, liver and renal systems. 27,32,34
RAS in the physiopathology of chronic diseases
Given the wide distribution of the RAS, it has been investigated for its role not only in cardiovascular diseases but also in other pathologies including metabolic, respiratory and neurological diseases, among others. Much of the evidence comes from preclinical studies where, through experimental genetic, pharmacological and experimental surgery techniques, relationships between the RAS and numerous pathologies have been identified.
RAS and cardiometabolic risk factors
Chronic diseases of cardiometabolic origin represent an important group in the global disease burden. Therefore, special attention has been given to the role of the RAS in cardiometabolic risk factors.
High blood pressure
The activation of AT1 receptors increases the activity of the sympathetic nervous system (Fig. 2). It increases the blood pressure, vasoconstriction, ADH secretion and aldosterone secretion, and it generates cardiac hypertrophy, fibrosis, inflammation and ROS production ( Figs. 1 and 2). In parallel, AT1 activation decreases parasympathetic tone, baroreflex sensitivity, NO production and natriuresis (Fig. 2). 24,45 Opposing effects are seen when AT2 is activated, which causes a decrease in blood pressure, fibrosis and inflammation. AT2 activity increases vasodilation, NO production, baroreflex sensitivity, cardioprotection and natriuresis. 7,34 Additionally, the activation of AT4 increases renal blood flow at the renal cortex and increases vasodilation, NO production and natriuresis. It is also cardioprotective and decreases inflammatory processes. MAS receptors decrease sympathetic tone, blood pressure, chronic hypertension and fibrosis, and increase parasympathetic tone, baroreflex, vasodilation, NO production and natriuresis. 1,8 Obesity Adipose tissue produces pro-inflammatory cytokines and metabolites that are involved in the control of energy metabolism, body weight, glycaemic homeostasis and lipid metabolism. 46,47 Adipose tissue has been shown to express RAS components, AngII, ACE and AT1 (i.e. the Ang II/ACE/AT1 axis), which is hyperactive in metabolic diseases. AngII induces lipogenesis and reduces lipolysis; these processes are related to obesity, insulin resistance and inflammation. This axis is opposed by the alternative RAS (ACE2/Ang1-7/MASR axis), which is also expressed in adipose tissue. Alternative RAS activation induces metabolic actions including increased lipolysis, reduced body weight, an improved lipid profile, the attenuation of metabolic syndrome, increased glucose absorption and a reduction of oxidative stress. Due to these protective responses, the ACE2/Ang1-7/MAS axis counteracts the negative effect of the ACE/Ang II/AT1 axis and has been proposed as a target to reduce obesity and DM2. In diet-induced obesity animal models, a decrease in SREBPs (sterol binding proteins) that are associated with lipogenesis, adipogenesis and cholesterol homeostasis helps to prevent lipotoxicity. In ACE2-knockout models, SREBPs are increased and lipogenesis is enhanced in the liver 48 and skeletal muscle, 49 supporting the hypothesis for the role of ACE2/angiotensin 1-7 in lipid metabolism. 19 Atherosclerosis. The effect of the RAS may be antiatherogenic or pro-atherogenic, and the key elements in this system are AngII, ACE2 and Ang1-7. Depending on which system predominates and whether there is inflammation, the effect will be pro-atherogenic (derived from the activation of the classic RAS) or anti-atherogenic (derived from the anti-inflammatory action of the alternative RAS). 7---9,24,33 Fatty liver. The presence of a hepatic RAS allows the supposition that its activation accompanies the development of non-alcoholic fatty liver disease. AT1 activation generates local changes that induce lipid accumulation and inflammatory phenomena that trigger a liver fibrotic process. 31,32,35
Smoking
In animal models, nicotine exposure has been shown to induce HBP and cardiac changes including hypertrophy and cardiac fibrosis. 50,51 Related studies have not been conclusive in humans, but it is proposed that nicotine can induce an imbalance between the classic RAS and the alternative RAS that increases the likelihood of developing chronic cardiopulmonary disease. 52
Type 2 diabetes mellitus
The mechanisms through which the RAS participates in the development of DM2 are unclear. Preclinical studies have highlighted the concurrency of various phenomena. AT1 has been described to activate extracellular matrix proteases, which stimulate the release of epidermal growth factor (EGF). When EGF interacts with its receptor at the peripheral tissue level, it activates a common intracellular pathway that is also activated by AT1. This pathway includes ERK 1/2 and mTOR/S6K-1 signalling, which phosphorylates insulin-sensitive receptors, resulting in insulin resistance. AT1 activation also affects the pancreas, which decreases pancreatic infusion and increases oxidative stress in pancreatic beta cells, with a consequent reduction in insulin production. These effects are reduced through the administration of ACEI or ARBs. AT1 activation is accompanied by the production of pro-inflammatory adipokines that enhance the effect of the classic RAS. 26---28,53
Kidney fibrosis
The presence of a RAS in the renal parenchyma can also determine kidney function or kidney damage based on the balance between the classic RAS and the alternative RAS. The latter system is related to anti-inflammatory, vasodilator and antifibrotic processes that are opposed to the classic RAS, which favours phenomena related to vasoconstriction, inflammation and fibrosis. The predominance of the classic system at the renal level would explain the fibrotic damage that can lead to chronic renal failure. 1,35,54
Other pathologies
The viral S (spike) protein of the SARS-CoV-2 virus (the virus that causes COVID-19) binds to ACE2 as its receptor for host cell entry. 55 This has revealed the participation of the RAS in hyperinflammation processes, a phenomenon that can be a common result in diseases as diverse as digestive, 56 pulmonary, 36,38 haematological, 57,58 hypothalamic, 41,42 neurodegenerative, 43,44 bone, 40 reproductive, 59 immunological, 60,61 obstetric 62 and cancer diseases. 63,64 This further broadens the RAS field and the research on its role in chronic diseases.
RAS and therapeutic strategies
Given the relationship between the RAS and the development of cardiovascular diseases such as essential high blood pressure, congestive heart failure and renal insufficiency, possible therapeutic alternatives have been raised, and one of the first targets was ACE (Fig. 1, Table 2). ACE inhibitors (ACEIs), such as captopril, enalapril, lisinopril and fosinopril, first appeared more than 50 years ago. These progressively became incorporated into clinical practice as an alternative to traditional antihypertensive agents, and today ACE inhibitors are considered to be the first line of treatment.
With the widespread use of ACEIs, side effects have been seen in phase IV pharmacological studies; thus, drugs that selectively act on the RAS axis have been identified. As a result, AT receptor blockers (ARBs) were incorporated into clinical practice nearly 30 years ago. These ARBs (e.g. losartan, irbesartan and valsartan) expanded the therapeutic options (Fig. 1, Table 2). In addition, direct renin inhibitors (DRIs) (e.g. aliskiren, remikiren and enalkiren) have been considered as an alternative to intervene in pathologies that involve elevated renin levels. 1,37,45,65 Pharmacological interventions have been aimed at blocking the actions of the classic RAS. However, with the discovery of the alternative RAS, scientists began working on alternatives to stimulate this system, with little success until recently. In preclinical studies, alternative RAS agonists have been evaluated for their preventive or protective effects due to their potential anti-inflammatory, antiproliferative and antifibrotic actions, but the results have been inconclusive. These substances have been employed to study the RAS in preclinical studies in animal models with induced pathologies. 21 Owing to the recent COVID-19 pandemic, novel therapeutics have been proposed. One of these is recombinant human soluble ACE2, which would bind to the virus and prevent its penetration into cells. It would also activate the alternative RAS, thereby increasing the production of Ang1-7. 66
Perspectives
The RAS and its role in body homeostasis has been studied for more than 100 years and has yet to be fully explored. The research conducted thus far has gradually provided information to explain the relationships among the RAS, health and the development of chronic pathologies. Multiple studies have highlighted the role of the RAS in various pathologies that were not previously associated with it, including metabolic, renal, cardiac, hepatic, digestive, endocrine, neurodegenerative, haematological, reproductive and muscular diseases. The relationship between the RAS and the respiratory infection caused by SARS-CoV was established nearly 20 years ago. 18,67,68 This relationship has become more evident in the current COVID-19 pandemic, in which the severity of the clinical manifestations and complications are related to the presence of ACE2 and a hyperinflammatory response, which is a typical response of the classic RAS. 67,69 Furthermore, it has been shown that individuals with comorbidities that involve an imbalance between the classic and alternative RAS (obesity, DM2, hypertension, hypertensive heart disease) are more likely to acquire the infection and are more likely to develop severe and fatal complications. There has been a debate about whether pharmacological interventions in the RAS prevent or promote the development of COVID-19 and its complications. This debate is open and requires extensive research to clarify the possible advantages and disadvantages of current therapeutics and to propose valid alternatives that reduce morbidity and mortality in cases where the RAS is involved. 12. Matsusaka T, Ichikawa I. Biological functions of angiotensin | 2022-02-27T14:11:43.100Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "9c94642be0fd4901d518ae264db7cb7586742e07",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8882059",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "c181c2e3ba5fd13634c491882c23d993645afbee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17354245 | pes2o/s2orc | v3-fos-license | Analysis of the structure of the Krylov subspace in various preconditioned CGS algorithms
An improved preconditioned CGS (PCGS) algorithm has recently been proposed, and it performs much better than the conventional PCGS algorithm. In this paper, the improved PCGS algorithm is verified as a coordinative to the left-preconditioned system; this is done by comparing, analyzing, and executing numerical examinations of various PCGS algorithms, including the most recently proposed one. We show that the direction of the preconditioned system for the CGS method is determined by the operations of $\alpha_k$ and $\beta_k$ in the PCGS algorithm. By comparing the logical structures of these algorithms, we show that the direction can be switched by the construction and setting of the initial shadow residual vector.
Introduction
The conjugate gradient squared (CGS) method [13] is one of various methods used to solve systems of linear equations Ax = b, (1.1) same analysis for two improved PCGS algorithms, one of which was mentioned above [8] and the other is presented in the present paper.
In this paper, when we refer to a preconditioned algorithm, we mean one that uses a preconditioning operator M or a preconditioning matrix, and by preconditioned system, we mean one that has been converted by some operator(s) based on M. These terms never indicate the algorithm for the preconditioning operation itself, such as incomplete LU decomposition or the approximate inverse. For example, under a preconditioned system, the original linear system (1.1) becomes Ãx̃ = b̃ (1.2), with Ã = M_L^{−1} A M_R^{−1}, x̃ = M_R x, b̃ = M_L^{−1} b (1.3), and the preconditioner M = M_L M_R (M ≈ A). In this paper, the matrix and the vector under the preconditioned system are denoted by the tilde (˜). However, the conversions in (1.2) and (1.3) are not implemented directly; rather, we construct the preconditioned algorithm that is equivalent to solving (1.2).
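A small dense toy example (not from the paper) can make the three directions concrete: whichever conversion is used, solving the converted system and undoing the conversion of x̃ recovers the solution of the original system (1.1). The matrices below are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
A = 4.0 * np.eye(6) + rng.random((6, 6))
b = rng.random(6)
M = np.diag(np.diag(A))                     # a simple diagonal preconditioner, M ~ A
ML = MR = np.diag(np.sqrt(np.diag(A)))      # one possible split M = ML * MR for the two-sided case
Minv, MLinv, MRinv = np.linalg.inv(M), np.linalg.inv(ML), np.linalg.inv(MR)

x_ref   = np.linalg.solve(A, b)
x_left  = np.linalg.solve(Minv @ A, Minv @ b)                    # left:  x~ = x
x_right = Minv @ np.linalg.solve(A @ Minv, b)                    # right: x~ = M x, so x = M^{-1} x~
x_two   = MRinv @ np.linalg.solve(MLinv @ A @ MRinv, MLinv @ b)  # two-sided: x~ = MR x
assert np.allclose(x_left, x_ref) and np.allclose(x_right, x_ref) and np.allclose(x_two, x_ref)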
This paper is organized as follows. Section 2 provides various preconditioned CGS algorithms; in particular, we consider right-and left-preconditioned systems for CGS algorithms. The improved PCGS algorithms are shown to be coordinative to the left-preconditioned system. Section 3 discusses the difference for PCGS algorithms between the direction of a preconditioning conversion and the direction of a preconditioned system. We show that preconditioning conversions are congruent for PCGS, and we provide some examples in which the direction of the preconditioned system for the CGS is switched. In section 4, we present some numerical results to illustrate the convergence properties of the various PCGS algorithms discussed in section 2, and we illustrate the effect of switching the direction of the preconditioned system for the CGS algorithm in section 3. Finally, our conclusions are presented in section 5.
Analyses of various PCGS algorithms
In this section, four kinds of PCGS algorithms are analyzed. These PCGS algorithms can be derived as follows.
Algorithm 1 (CGS method under a preconditioned system).
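The listing of Algorithm 1 is not reproduced above. As a reference point, the following is a minimal sketch of the standard CGS iteration applied to a generic real system, which is what Algorithm 1 reduces to when the preconditioning conversion is removed; all variable names are illustrative and no breakdown safeguards are included.

import numpy as np

def cgs(A, b, x0, rtol=1e-10, maxiter=500, shadow=None):
    # Minimal CGS sketch; `shadow` plays the role of the initial shadow residual vector.
    x = x0.copy()
    r = b - A @ x
    r_shadow = r.copy() if shadow is None else shadow
    rho_old = 1.0
    p = u = q = np.zeros_like(r)
    for k in range(maxiter):
        rho = np.vdot(r_shadow, r)                 # biorthogonality coefficient
        beta = 0.0 if k == 0 else rho / rho_old
        u = r + beta * q
        p = u + beta * (q + beta * p)
        v = A @ p
        alpha = rho / np.vdot(r_shadow, v)         # biconjugacy coefficient
        q = u - alpha * v
        x = x + alpha * (u + q)
        r = r - alpha * (A @ (u + q))
        rho_old = rho
        if np.linalg.norm(r) <= rtol * np.linalg.norm(b):
            break
    return x, k + 1

Leaving shadow=None corresponds to the common choice of taking the initial shadow residual equal to the initial residual.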
Any preconditioned algorithm can be derived by substituting the matrix with the preconditioner for the matrix with the tilde and the vectors with the preconditioner for the vectors with the tilde. Obviously, Algorithm 1 without the preconditioning conversion is the same as the CGS method. If Ã is a symmetric positive definite matrix and r̃^♯_0 = r̃_0, then Algorithm 1 is mathematically equivalent to the conjugate gradient (CG) method [6] under a preconditioned system. The case of (1.3) is called two-sided preconditioning, the case in which M_L = M and M_R = I is called left preconditioning, and the case in which M_L = I and M_R = M is called right preconditioning, where I denotes the identity matrix. We now formally define these.
Definition 1 For the system and solution of Ãx̃ = b̃, we define the direction of a preconditioned system of linear equations as follows:
• The two-sided preconditioned system: Equation (1.2), with Ã = M_L^{−1} A M_R^{−1}, x̃ = M_R x, and b̃ = M_L^{−1} b.
• The left-preconditioned system: M_L = M and M_R = I, that is, Ã = M^{−1} A, x̃ = x, and b̃ = M^{−1} b.
• The right-preconditioned system: M_L = I and M_R = M, that is, Ã = A M^{−1}, x̃ = M x, and b̃ = b.
Other vectors in the solving method are not preconditioned. The initial guess is given as x_0. The two-sided preconditioned system may be impracticable, but it is of theoretical interest. The preconditioned system is different from the preconditioning conversion. There are various ways of performing a preconditioning conversion, but the direction of the preconditioned system is uniquely defined. (For example, see the preconditioning conversions (2.2) and (2.5) in Algorithm 2, Section 2.1.1).
Both the CGS and the PCGS extend the subspace by two dimensions in each iteration [2,5]; therefore, the Krylov subspace K_{2k}(Ã, r̃_0) generated by the k-th iteration forms the structure K_{2k}(Ã, r̃_0) = span{r̃_0, Ãr̃_0, Ã²r̃_0, . . . , Ã^{2k−1} r̃_0}.
Two typical PCGS algorithms
In this subsection, we present two well-known and typical PCGS algorithms. One is a right-preconditioned system, although this is not always recognized, and the other is a left-preconditioned system. For both of these algorithms, we examine the structure of their Krylov subspace and the solution vector.
Conventional right-preconditioned PCGS
This PCGS algorithm has been described in many manuscripts and numerical libraries, for example, see [1,12,15]. It is usually derived by the following preconditioning conversion 2 : Finally, Algorithm 2 is derived.
Algorithm 2 (conventional PCGS, right-preconditioned system).
The stopping criterion is ‖r_{k+1}‖/‖b‖ = ‖b − Ax_{k+1}‖/‖b‖ ≤ ε. (2.4) The results of this algorithm can also be derived by a second preconditioning conversion, (2.5), which is the same as using M_L = I and M_R = M in (2.2). Furthermore, this is the same as converting only Ã, x̃_k, and b̃, that is, the right-preconditioned system.
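The listing of Algorithm 2 is likewise not reproduced here. The sketch below shows the conventional preconditioned CGS in the form commonly found in textbooks and numerical libraries, where the preconditioner enters only through applications of M^{−1} and the recurred residual remains r_k = b − Ax_k, so the test matches (2.4). It is a generic reconstruction for illustration, not the paper's exact pseudocode; apply_Minv is an assumed user-supplied routine.

import numpy as np

def pcgs_conventional(A, b, x0, apply_Minv, rtol=1e-10, maxiter=500, shadow=None):
    # Conventional (right-)preconditioned CGS sketch; apply_Minv(v) returns M^{-1} v.
    x = x0.copy()
    r = b - A @ x                                  # true residual r_k = b - A x_k
    r_shadow = r.copy() if shadow is None else shadow
    rho_old = 1.0
    p = u = q = np.zeros_like(r)
    for k in range(maxiter):
        rho = np.vdot(r_shadow, r)
        beta = 0.0 if k == 0 else rho / rho_old
        u = r + beta * q
        p = u + beta * (q + beta * p)
        p_hat = apply_Minv(p)                      # preconditioning solve
        v = A @ p_hat
        alpha = rho / np.vdot(r_shadow, v)
        q = u - alpha * v
        u_hat = apply_Minv(u + q)
        x = x + alpha * u_hat
        r = r - alpha * (A @ u_hat)
        rho_old = rho
        if np.linalg.norm(r) <= rtol * np.linalg.norm(b):   # stopping criterion as in (2.4)
            break
    return x, k + 1

# Example with a Jacobi-type preconditioner (illustrative only).
A = np.diag([4.0, 5.0, 6.0]) + np.triu(np.ones((3, 3)), 1)
b = np.ones(3)
x, its = pcgs_conventional(A, b, np.zeros(3), lambda v: v / np.diag(A))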
Left-preconditioned CGS
Another preconditioning conversion, (2.6), can be used to derive another PCGS algorithm. This is the same as applying M_L = M and M_R = I to Ã, x̃_k, and b̃, that is, the left-preconditioned system.
Algorithm 3 (left-preconditioned CGS).
In this paper, r^+_k denotes the residual vector under the left-preconditioned system; its internal structure is r^+_k ≡ M^{−1} r_k, and this is the definition of r^+_k. Note that p^+_k, u^+_k, and q^+_k achieve the same purpose. Here, r^+_k in Algorithm 3 provides different information from the residual vector r_k = b − Ax_k, and here the stopping criterion is ‖r^+_{k+1}‖/‖M^{−1}b‖ ≤ ε. Note that this is also different from (2.4), and this is an example of incomplete judging, because r^+_{k+1} never provides important information about b − Ax_k [7].
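The practical consequence can be seen with a few lines of NumPy (arbitrary toy data, not from the paper): the left-preconditioned quantity ‖M^{−1}(b − Ax_k)‖/‖M^{−1}b‖ used as a stopping test can take a value quite different from the true relative residual ‖b − Ax_k‖/‖b‖, so convergence may be declared earlier or later than (2.4) would indicate.

import numpy as np

rng = np.random.default_rng(1)
A = 3.0 * np.eye(5) + 0.1 * rng.random((5, 5))
b = rng.random(5)
M = np.diag([1.0, 1e1, 1e2, 1e3, 1e4])              # a deliberately ill-scaled preconditioner
x_k = np.linalg.solve(A, b) + 1e-6 * rng.random(5)  # some approximate iterate

r = b - A @ x_k
true_test = np.linalg.norm(r) / np.linalg.norm(b)                                  # criterion (2.4)
left_test = np.linalg.norm(np.linalg.solve(M, r)) / np.linalg.norm(np.linalg.solve(M, b))
print(true_test, left_test)                          # the two tests generally give different values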
This algorithm can also be derived by the following conversion: If M L = M and M R = I are substituted into (2.8), then (2.6) is obtained.
Comparison between two typical PCGS algorithms
Here, we compare the conventional PCGS (Algorithm 2) with the left-PCGS (Algorithm 3); we will focus on the structures of their Krylov subspaces and the solution vectors. The conventional PCGS (Algorithm 2) is the right-preconditioned system, that is, (AM^{−1})(Mx) = b. The relation between the Krylov subspace and the solution vector is Mx_k ∈ Mx_0 + K^R_{2k}(AM^{−1}, r_0). This means that the Krylov subspace K^R_{2k}(AM^{−1}, r_0) generates the solution vector as Mx_k, not as x_k itself. The left-PCGS (Algorithm 3) is the left-preconditioned system, that is, (M^{−1}A)x = M^{−1}b. The relation between its Krylov subspace and the solution vector is x_k ∈ x_0 + K^L_{2k}(M^{−1}A, r^+_0). Therefore, the Krylov subspace K^L_{2k}(M^{−1}A, r^+_0) generates the solution vector directly as x_k (Algorithm 3).
These are summarized in Table 1.
It is important to note that the structures are different for the two Krylov subspaces, K^R_{2k}(AM^{−1}, r_0) for the conventional PCGS (the right system) and K^L_{2k}(M^{−1}A, r^+_0) for the left-PCGS, because their scalar parameters α_k and β_k are not equivalent [8,9]. We summarize this here; for details, see [9]. The recurrences of the BiCG under the preconditioned system are R_0(λ) = 1, P_0(λ) = 1, R_{k+1}(λ) = R_k(λ) − α_k λ P_k(λ), P_{k+1}(λ) = R_{k+1}(λ) + β_k P_k(λ). Here, R_k(λ) is the residual polynomial of degree k, and P_k(λ) is the probing direction polynomial of degree k, that is, r̃_k = R_k(Ã)r̃_0 and p̃_k = P_k(Ã)r̃_0. For example, in the left-PCGS, these recurrences are applied with Ã = M^{−1}A and r̃_0 = r^+_0.
Improved preconditioned CGS algorithms
An improved PCGS algorithm has been proposed [8]. This algorithm retains some mathematical properties that are associated with the CGS derivation from the BiCG method under a nonpreconditioned system. The improved PCGS algorithm from [8] will be referred to as "Improved1." Another improved PCGS algorithm will be presented, and it will be referred to as "Improved2." We note that Improved2 is mathematically equivalent to Improved1. The stopping criterion for both algorithms is (2.4).
Analysis of the four kinds of PCGS algorithms
We will now analyze the four PCGS algorithms presented above. We split the residual vector of the left-PCGS (Algorithm 3) r + k into and give the necessary deformations; then the left-PCGS (Algorithm 3) is reduced to Improved1 (Algorithm 4). Alternatively, we can derive Algorithm 3 from Algorithm 4 by substituting M −1 r k for r + k , that is, r + k ≡ M −1 r k . By this means, we can explain the relationships between the four kinds of PCGS algorithms, as shown in Figure 1. In addition, if we apply (2.15) to (2.10) for the structure of Krylov subspace of Algorithm 3, then The structure of the solution vector for the Krylov subspace is then Therefore, the system of Improved1 (Algorithm 4) is coordinative to that of the left-PCGS (Algorithm 3), and Improved2 (Algorithm 5) is equivalent to Improved1 (Algorithm 4). Both algorithms have important advantages over the left-PCGS, because their residual vector is r k , and their stopping criterion is (2.4), not r + k+1 / M −1 b . Table 2 shows the structure of the residual vector and the structure of the solution vector for the Krylov subspace for each of the four PCGS algorithms.
In this summary, we see that the structures of the Krylov subspaces differ between the conventional PCGS (the right-preconditioned system) and the improved PCGS (coordinative to the left-preconditioned system), because the scalar parameters α_k and β_k are not equivalent [8,9]. Furthermore, there is superficially the same recurrence relation for the solution vector for both the conventional PCGS (Algorithm 2) and Improved2 (Algorithm 5): x_{k+1} = x_k + α_k(u_k + q_k). However, each recurrence relation belongs to a different system, because the components of the conventional PCGS are α^R_k, u^R_k, and q^R_k, and those of Improved2 are α^L_k, u^L_k, and q^L_k. The structure of the residual vector of Improved1 (Algorithm 4) and Improved2 (Algorithm 5) is illustrated as Table 2, because they are both from the left-PCGS and the structure of their Krylov subspace is K^L_{2k}(M^{−1}A, M^{−1}r_0).
Congruence of preconditioning conversion, and direction of preconditioned system for the CGS
In a previous section, we defined the general direction of a preconditioned CGS system (see Definition 1). However, the direction of a preconditioned system is different from the direction of a preconditioning conversion. We will show that the direction of a preconditioned system is switched by the construction of the ISRV.
Congruence of preconditioning conversion for PCGS
Here, we consider the congruence of a preconditioning conversion for PCGS in the following proposition.
Proposition 1 (Congruency)
A PCGS algorithm is congruent with respect to the direction of the preconditioning conversion; that is, the conversion direction does not change the resulting PCGS algorithm. ✷ Although this property has been repeatedly discussed in the literature, it should be considered when evaluating the direction of a preconditioned system.
Direction of a preconditioned system and that of the PCGS
The direction of a preconditioned system is different from the direction of a preconditioning conversion.
Proposition 2
The direction of a preconditioned system is determined by the operations of α k and β k in each PCGS algorithm. These intrinsic operations are based on biorthogonality and biconjugacy.
Proof. The operations of biorthogonality and biconjugacy in each PCGS algorithm and the structure of the solution vector for each Krylov subspace are shown below. The underlined inner products are the actual operators for each PCGS.
Only the conventional PCGS (Algorithm 2) has the ISRV in the form r♭_0; in all other algorithms, it is r♯_0. The ISRV r♭_0 never splits as M^{-T}r♯_0 in this algorithm, and the preconditioned coefficient matrix for the biconjugacy is fixed as AM^{-1}, that is, the right-preconditioned system.
Proposition 3. On the structure of the biorthogonality (r♯_0, r_k) in the iterated part of each PCGS algorithm, there exists a single preconditioning operator between r_k (the basic form of the residual vector) and r♯_0 (the basic form of the ISRV), such that M^{-1} operates on r_k or M^{-T} operates on r♯_0.
Proof. We split r♭_0 → M^{-T}r♯_0 and r^+_k → M^{-1}r_k in Algorithms 2 to 5, and obtain the corresponding expressions; the underlined inner products are the actual operators for each PCGS. In addition, for the two-sided conversion, we obtain the analogous expressions.

Corollary 1. On the structure of the biconjugacy (r♯_0, Ãp_k) in the iterated part of each PCGS algorithm, there exists a single preconditioning operator between A (the coefficient matrix) and r♯_0 (the basic form of the ISRV), such that M^{-1} operates on A or M^{-T} operates on r♯_0. Furthermore, there exists a single preconditioning operator between A and p_k (the basic form of the probing direction vector).
From Propositions 2 and 3 and Corollary 1, the intrinsic operations on the biorthogonality and the biconjugacy for the four PCGS algorithms have the same matrix and vector structures, even though the superficial descriptions of these algorithms are different.
ISRV switches the direction of the preconditioned system for the CGS
Although the mathematical properties of the conventional PCGS (Algorithm 2) and Improved2 (Algorithm 5) are quite different, the structures of these algorithms are very similar. This can be seen by replacing M^{-T}r♯_0 with r♭_0 in Algorithm 5; in the initial part, we require (r♭_0, r_0) ≠ 0, e.g., r♭_0 = r_0, and then Algorithm 5 becomes Algorithm 2.
Theorem 1 The direction of a preconditioned system for the CGS method is switched by construction and setting of the ISRV.
Proof. Proposition 2 shows that the direction of a preconditioned system for the CGS algorithm is determined by the structures of the biorthogonality and the biconjugacy. Here, we show that their structures are switched by the ISRV. The underlined inner products are the actual operators for each PCGS.
• ISRV2: r♯_0 = M^T r_0 (based on the right conversion). If we apply ISRV2 to Algorithm 5, then Algorithm 5 is equivalent to Algorithm 2 with r♭_0 = r_0. Alternatively, if we apply r♭_0 = M^{-T}M^{-1}r_0 (we will call this ISRV9) to Algorithm 2, then Algorithm 2 is equivalent to Algorithm 5 with ISRV1. If we change Improved2 (Algorithm 5) to Improved1 (Algorithm 4), then we obtain the same results.
In the next section, Theorem 1 is verified numerically.
Numerical experiments
Convergence of the four PCGS algorithms of section 2 is confirmed in section 4.1 by evaluating three cases. Furthermore, in section 4.2, the ability of the ISRV to switch the direction of the preconditioned system (as discussed in section 3.3) and Theorem 1 are verified.
Comparison of the four PCGS algorithms
The test problems were generated by building real nonsymmetric matrices corresponding to linear systems taken from the University of Florida Sparse Matrix Collection [3] and the Matrix Market [11]. The RHS vector b of (1.1) was generated by setting all elements of the exact solution vector x_exact to 1.0 and substituting this into (1.1). The solution algorithm was implemented using the sequential mode of the Lis numerical computation library (version 1.1.2 [14]) in double precision, with the compiler options registered in the Lis "Makefile." Furthermore, we set the initial solution to x_0 = 0. The maximum number of iterations was set to 1000.
The numerical experiments were executed on a DELL Precision T7400 (Intel Xeon E5420, 2.5 GHz CPU, 16 GB RAM) running CentOS (kernel 2.6.18), with the Intel icc 10.1 and ifort 10.1 compilers.
In all tests, ILU(0) was adopted as the preconditioning operation for each PCGS algorithm; here, the value "zero" denotes the fill-in level. The ISRVs were set as r♭_0 = r_0 in the conventional PCGS (Algorithm 2), r♯_0 = r^+_0 in the left-PCGS (Algorithm 3), and r♯_0 = M^{-1}r_0 in Improved1 and Improved2 (Algorithms 4 and 5, respectively).
We considered the following three cases: (a) evaluating the algorithm relative residual (see Figures 2 and 5, and Table 3); (b) evaluating the true relative residual (see Figures 3 and 6, and Table 4); (c) when we have prior knowledge of the exact solution (x_exact), evaluating the true relative error (see Figures 4 and 7, and Table 5).
We adopted the following stopping criteria. For case (a), we adopted the 2-norm of (2.4) for Algorithms 2, 4, and 5, and the 2-norm of (2.7) for Algorithm 3. For case (b), we adopted ||b − Ax_{k+1}||_2 / ||b||_2 ≤ ε for all algorithms. For case (c), we adopted ||x_{k+1} − x_exact||_2 / ||x_exact||_2 ≤ ε for all algorithms. We set ε = 10^{-12} for all cases.

Table 3: (a) Numerical evaluation using the relative residual of each algorithm. N is the problem size, and NNZ is the number of nonzero elements. The three numbers in each row of the column for each method are as follows: the leftmost number is the true relative residual (log10 of the 2-norm), the number in parentheses is the number of iterations required to reach convergence, and the lower number is the true relative error (log10 of the 2-norm).

We will first focus on the results of the conventional PCGS (Algorithm 2), as shown in Tables 3 to 5. Breakdown occurs for jpwh_991, and stagnation occurs for olm5000 at insufficient accuracy, although the other three algorithms (Algorithms 3 to 5) were able to solve them.
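A small sketch of how the three quantities of cases (a) to (c) can be computed for a candidate iterate is given below. The function and variable names are illustrative; the case (a) quantity is shown for an r_k-based criterion such as (2.4), whereas the left-PCGS uses ||r^+_k|| / ||M^{-1}b|| instead, as stated above.

import numpy as np

def evaluation_quantities(A, b, x_k, r_k, x_exact, eps=1e-12):
    # Return the three quantities used in cases (a)-(c) and whether each satisfies eps.
    alg_rel_res  = np.linalg.norm(r_k) / np.linalg.norm(b)                  # (a) algorithm relative residual
    true_rel_res = np.linalg.norm(b - A @ x_k) / np.linalg.norm(b)          # (b) true relative residual
    true_rel_err = np.linalg.norm(x_k - x_exact) / np.linalg.norm(x_exact)  # (c) true relative error
    return {"(a)": (alg_rel_res,  alg_rel_res  <= eps),
            "(b)": (true_rel_res, true_rel_res <= eps),
            "(c)": (true_rel_err, true_rel_err <= eps)}

# Test setup as described above: x_exact has all elements 1.0, b = A x_exact, and x_0 = 0.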
Next, it is very important to compare cases (a) and (b) (Tables 3 and 4) with case (c) (Table 5) in order to determine the crucial ways in which they differ. Because (a) and (b) can be evaluated without knowing the exact solution, whereas (c) requires the exact solution, it is important to examine the results when the exact solution is known. Comparing the results for bfwa782, poisson3Db, and watt_1 in cases (a) and (b) (Tables 3 and 4), the conventional PCGS (Algorithm 2) gives results in which the true relative residual or the true relative error (or both) are much less accurate than those obtained by the other algorithms, and only in the conventional PCGS does stagnation occur at insufficient accuracy. In particular, the conventional PCGS is the fastest to converge for watt_1 in cases (a) and (b) (Tables 3 and 4), but this is undesirable, because it converges too quickly: the relative residual and the true relative residual satisfy the required accuracy even though the solution itself does not. On the other hand, when evaluating by the true relative error in case (c) (Table 5), the conventional PCGS converges after almost the same number of iterations as the other methods.
Next, in contrast, the results of the conventional PCGS with wang4 gave the most accurate true relative error for cases (a) and (b) (Tables 3 and 4), but the conventional PCGS stagnated with wang4, and this resulted in the lowest accuracy for case (c) ( Table 5).
From the graphs in Figures 2 to 7, we can see the following: in case (a), Improved1, Improved2, and the left-PCGS show different convergence behaviors, but in cases (b) and (c), they show similar behaviors. These results correspond to the analysis in section 2.3. Therefore, Algorithms 4 and 5 are coordinative to Algorithm 3 regarding the structures of the solution vector for the generated Krylov subspace, in spite of the difference between the residual vectors: r + k for the left-PCGS (Algorithm 3) and r k for Improved1 and Improved2 (Algorithms 4 and 5, respectively). The conventional PCGS had a convergence behavior that differs from those of all of the other algorithms for all cases (a) to (c).
These numerical results conform to the behavior expected from the discussion of the relation between the structure of the solution vector and the Krylov subspace. We compared the numerical results with the theoretical results of sections 2.1.3 and 2.3; these are summarized as follows: 1. For case (a), the difference between the residual vector r^+_k of the left-PCGS and r_k has been verified.
2. For cases (b) and (c), we verified (2.16): the differences between the conventional PCGS on the one hand and the left-PCGS, Improved1, and Improved2 on the other have been confirmed through their convergence behaviors. That is, we confirmed the relation between the solution vector and the Krylov subspace for the right system (the conventional PCGS) versus the left-PCGS and the PCGSs coordinative to it (Improved1 and Improved2).
Behavior of the PCGS when it is switched by the ISRV
In this subsection, the experimental environment was the same as that described in section 4.1, except that we used Matlab 7.8.0 (R2009a), and we supplied different ISRVs to the conventional PCGS and Improved1.
In both figures, "Impr1-ISRV2" and "Conv ISRV9" were added to verify Theorem 1. The convergence history of "Impr1-ISRV2" is the same as that of "Conventional," and those of "Impr1-ISRV1" and "Conv ISRV9" are the same as that of "Left." We have numerically verified the discussion in section 3; in particular, we have verified Theorem 1.
Conclusions
In this paper, an improved PCGS algorithm [8] has been analyzed by mathematically comparing four different PCGS algorithms, focusing on the structures of their Krylov subspaces and solution vectors. From our analysis and numerical results, we have verified two improved PCGS algorithms. They are both coordinative to the left-preconditioned system, although their residual vector maintains the basic form r_k, not r^+_k. For both algorithms, the structures of the Krylov subspace and the solution vector are x_k ∈ x_0 + M^{-1}K^L_{2k}(AM^{-1}, r_0). Further, the numerical results of the improved PCGS with the ILU(0) preconditioner show many advantages; effectiveness and consistency across several other preconditioners have also been shown; see [8].
We presented a general definition of the direction of a preconditioned system of linear equations. Furthermore, we have shown that the direction of a preconditioned system for the CGS is switched by the construction and setting of the ISRV. This is possible because the PCGS algorithm is congruent with respect to the direction of the preconditioning conversion. We have also shown that the direction of a preconditioned system for the CGS is determined by the operations of α_k and β_k, and that these intrinsic operations are based on biorthogonality and biconjugacy. However, the structures of these intrinsic operations are the same in all four of the PCGS algorithms. Therefore, we have focused on the ability of the ISRV to switch the direction of a preconditioned system; such a mechanism may be unique to the bi-Lanczos-type algorithms that are based on the BiCG method.
As we analyzed the four PCGS algorithms, we paid particular attention to the vectors. We note that there exist preconditioned BiCG (PBiCG) algorithms that correspond to the preconditioning conversion of each of the PCGS algorithms. The polynomial structure of the PBiCG can be minutely analyzed by replacing the vectors of the PCGS. We have analyzed the four PBiCG algorithms in parallel [9], and each PBiCG corresponds to one of the four PCGS algorithms in this paper. In [9], using the ISRV to switch the direction of a preconditioned system was discussed in detail. | 2016-03-01T08:15:52.000Z | 2016-03-01T00:00:00.000 | {
"year": 2016,
"sha1": "af099296aafaae1832758048d9952ddd4a9d988f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.rinam.2019.100008",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "80b703cd4894b962f01aeba010f3de1384735dcd",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
239001763 | pes2o/s2orc | v3-fos-license | THE EFFECT OF OREGANO ESSENTIAL OIL ON CHICKEN MEAT LIPID OXIDATION AND PEROXIDATION
The study aimed to investigate and evaluate the oxidative stability of chicken thighs with skin stored under freezing conditions for various times following treatment with oregano essential oil. The results were compared with a control group without the use of oregano essential oil. Samples of chicken thighs with skin were obtained from an experiment performed on a poultry farm with a deep-litter breeding system. The application of oregano essential oil to chicken thighs with skin did not produce a statistically significant difference (p >0.05) in the dry matter content, fat content or acid value compared to the control group, in which coccidiostats were used in the starter and growth feed mixtures. A statistically significant difference in the peroxide value was found after applying oregano essential oil to chicken thighs with skin, compared to the control group with coccidiostats in the starter and growth feed mixtures, when the thighs were stored for 1 day at room temperature (p ≤0.01) and for 12 months under freezing conditions at -18 °C (p ≤0.05); the difference was statistically non-significant (p >0.05) when the thighs with skin were stored for 6 and 9 months under freezing conditions at -18 °C. In conclusion, maintaining the oxidative stability of chicken meat requires knowing the factors that affect it and preparing the conditions for its maintenance. Chicken meat is generally susceptible to oxidative damage because it is characterized by a high concentration of polyunsaturated fatty acids. With a sufficient amount of effective antioxidants, chicken meat could be a homoeostatic system that remains limited in, or free of, oxidized compounds and reactive components. These questions are the subject of further research in the field of oxidative stability of chicken meat.
INTRODUCTION
In addition to microbial degradation, lipid oxidation is a major cause of quality deterioration and one of the main factors limiting the quality and acceptability of meat and meat products. Products that result from the peroxidation of polyunsaturated fatty acids affect meat quality parameters and reduce shelf life. In addition, many of these peroxidation products, including malondialdehyde, are toxic and genotoxic, cause intracellular oxidative stress, and could increase the frequency of tumors and atherosclerosis. Consumption of foods containing malondialdehyde therefore poses a risk to human health. The use of antioxidants is one of the main strategies to prevent the oxidation of lipids in meat and meat products. Although most synthetic antioxidants show high efficacy at low concentrations and are classified as GRAS, i.e. generally recognized as safe, the acceptable dosage of these substances is low and the potential health risk limits their widespread use. In addition, in recent years, consumer concerns about the relation between food composition and human health and the increasing demand for natural-based foods have led to the food industry's interest in alternative methods to prevent and slow down lipid oxidation. One of these alternatives is the use of plant extracts, especially essential oils (Moghrovyan et al., 2019).
Oxidation significantly reduces the shelf life of prepared meat products. The addition of antioxidants to foods can be an effective solution to reduce oxidation and maintain its properties such as softness, juiciness, palatability, and shelf life. Both natural and synthetic antioxidants are available. Synthetic antioxidants, such as butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT), can effectively inhibit oxidation-induced adverse changes in meat products, but these antioxidants can have potential genotoxic effects. Therefore, the consumer prefers natural antioxidants (Feng et al., 2017). Oregano essential oil, as well as thyme essential oil, is among the 10 most popular essential oils used as preservatives in food. Both are categorized as GRAS by the Food and Drug Administration. Due to the high content of phenolic compounds, these essential oils are effective antioxidants. Standardization of natural preservatives is needed (Boskovic et al., 2019).
Poultry meat is particularly susceptible to oxidative damage due to its high concentration of polyunsaturated fatty acids. The rate and extent of lipid oxidation in muscle tissue have been found to depend on the degree of muscle tissue damage during handling before the slaughter of broiler chickens, such as stress and physical injury, as well as after slaughter, such as early post-mortem conditions, pH and carcass temperature. Under practical conditions, synthetic antioxidants such as butylated hydroxytoluene or butylated hydroxyanisole are often used. However, there is a trend towards compounds that allow the transition from synthetic to natural antioxidants. The activity of Origanum vulgare L. is mainly attributed to its main components carvacrol and thymol, substances that modify the permeability of the bacterial cell membrane and react with lipid and hydroxyl radicals, converting them into stable products (Luna et al., 2010). Scientific hypothesis: oregano essential oil supports the oxidative stability of chicken thigh muscle fat.
MATERIAL AND METHODOLOGY
Technique of the experiment
The feeding experiment was carried out under practical conditions on a broiler chicken farm. A hybrid combination of Cobb 500 was used as the final fattening type of broiler chickens. The feeding experiment was performed to obtain samples for chemical analyses. Each box was 2 m wide and 3.8 m long, which follows the principle of unrestricted movement of broiler chickens, i.e. 33.0 kg live weight per m² at removal from the house. 100 one-day-old chickens were placed in each group. The experiment under practical conditions lasted 40 days. Broiler chickens were fed from plate feeders and drank water from hat drinkers until the age of 14 days; later, until the end of the experiment, they received feed from tube feeders and drank water from bucket drinkers. Starter, growth and final feed mixtures were used for feeding, and their basic raw materials were the same in both the control and experimental groups. The difference in the feed mixtures between the control and experimental groups was in the active substance added as a feed supplement in the control group and the additive applied to the thighs after the slaughter of the broiler chickens in the experimental group. The control group used a commercially produced starter feed mixture for chickens aged 1 to 14 days with the coccidiostat Maxiban G160 as a feed supplement, a growth feed mixture for chickens aged 15 to 33 days with the coccidiostat Sacox as a feed supplement, and a final feed mixture for chickens aged 34 to 40 days. In the experimental group, the feed mixtures were the same as in the control group, but without coccidiostats. The feed mixtures were manufactured by Biofeed, a.s., Kolárovo, where they were manually mixed from feed materials 4 times in a row.
Sample preparation for laboratory measurements
Broiler chickens were randomly selected at the end of the feeding experiment, 24 pcs from the control group (giving 48 thighs) and 24 pcs from the experimental group (i.e. 48 thighs). The broiler chickens were killed humanely and technologically processed. The thighs were separated from the carcass, a total of 48 pcs in each group. Each thigh was packed in a microtene bag and labeled K (control group) or P (experimental group). All thigh samples with skin from the control and experimental groups were boned. The thigh samples with skin of the control group were packed in microtene bags and marked K (control group) and further marked 1, 6, 9, and 12, which means 12 thighs for analysis for each storage time (1 means 1 day after the slaughter of the broiler chickens and storage at room temperature; 6, 9 and 12 mean 6, 9 and 12 months after the slaughter of the broiler chickens and storage under freezing conditions at -18 °C).
Application of oregano essential oil to the chicken thighs in the experimental group and their storage
Oregano essential oil was applied to each thigh sample with skin of the experimental group. A thigh sample was placed in a porcelain mortar and 4.0 mL of oregano essential oil was applied to the surface of the entire thigh. The oregano essential oil was applied with a syringe and thoroughly spread by hand over the entire thigh surface using surgical gloves. Thigh samples with skin of the experimental group were packed in microtene bags and marked P (experimental group) and further marked 1, 6, 9, and 12, which means 12 thighs for analysis for each storage time (1 means 1 day after the slaughter of the broiler chickens and application of oregano essential oil, with storage at room temperature; 6, 9 and 12 mean 6, 9 and 12 months after the slaughter of the broiler chickens and application of oregano essential oil, with storage under freezing conditions at -18 °C). Each thigh with skin, wrapped in a microtene bag and labeled, was stored under the specified conditions according to the scheme in Table 1. The oregano essential oil was obtained as a commercial additive from Calendula, a. s., Nová Ľubovňa, and was accompanied by an A-test. The main components of oregano essential oil according to Özkan et al. (2017) are carvacrol (63.97%), p-cymene (12.63%), linalool (3.67%), α-terpineol (2.54%), and (−)-terpinen-4-ol (2.24%).
Examined indicators
A feeding experiment was performed to obtain samples for chemical analysis. Chemical analysis of thigh samples was aimed at determining: (a) the dry matter of thigh muscle with skin depending on the storage period, (b) the fat of thigh muscle with skin depending on the storage period, (c) the acid value of thigh muscle with skin depending on the storage period, (d) the peroxide value of the thigh muscle with the skin depending on the storage period.
Weighing of samples
All weighing of samples and laboratory equipment for the chemical analyses of the thighs was performed on a Kern 440-49N automatic balance with an accuracy of d = 0.01 g. Note (to Table 1): thigh sample = boned thigh with skin, in the experimental group with oregano essential oil applied; 1 = 1 day after the slaughter of the broiler chickens (and, in the experimental group, after application of oregano essential oil), samples stored at room temperature; 6, 9 and 12 = 6, 9 and 12 months after the slaughter of the broiler chickens (and, in the experimental group, after application of oregano essential oil), samples stored under freezing conditions at -18 °C; weight of sample for chemical analysis = 70 g weighed from each mixed thigh with skin.
Grinding and homogenization of samples
The AOAC Official Method 983.18, a Codex-adopted AOAC method for meat and meat products, was used for sample preparation. Before each chemical analysis, each sample was individually ground and homogenized in a Grindomix 200 laboratory mixer and quantitatively transferred to a glass laboratory beaker. 70 g of the homogenized fresh mass of each thigh sample was weighed.
The procedure for analyzed sample drying
Sea sand was poured onto the bottom of a Petri dish. The Petri dish with sea sand was weighed. The homogenized sample was transferred from the mixer onto the sea sand in the Petri dish, and the sample thus prepared was weighed. The prepared samples with Petri dish and sea sand for chemical analysis were dried in an HS 62A oven at 103 ±2 °C for 12 hours to constant weight. The dried samples, together with the Petri dish and sand, were transferred to a desiccator using laboratory tongs and allowed to cool. The cooled samples with the Petri dish and sand were weighed. The difference between the weight of the dried sample with the Petri dish and sea sand and the weight of the Petri dish with sea sand represented the weight of the sample for chemical analysis. The weight of each dried thigh sample with skin averaged 19.3 g.
Dry matter per fresh mass
The standard reference method (AOAC Method 950.46) for the measurement of moisture in meat was used. Analyzed thigh muscle samples of 5.0 g were placed in pre-weighed aluminum dishes (≥50 mm diameter and ≤40 mm deep) and then placed in the drying oven (type HS 62A) at 103 ±2 °C for 4 hours to constant weight. The samples in the aluminum dishes were dried partially covered. After drying, the samples with covered aluminum dishes were transferred to a desiccator to cool. The dishes with their dried samples were then reweighed. Calculation of the dry matter per fresh mass of the thigh muscle with skin and the dry matter of the analyzed sample (the same calculation procedure applies to both):

Dry matter = (a / n) × 100 (1)

where: dry matter per fresh mass and dry matter of the analyzed sample, %; a = weight of the dried sample, g; n = sample weight, g.
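A minimal sketch of the calculation in equation (1); the function name, variable names, and example values are illustrative only.

def dry_matter_percent(dried_sample_weight_g, sample_weight_g):
    # Dry matter (and dry matter per fresh mass), %, according to equation (1): (a / n) x 100
    return dried_sample_weight_g / sample_weight_g * 100.0

# Illustrative example only: 1.38 g of dried residue from a 5.0 g sample gives 27.6 %.
print(dry_matter_percent(1.38, 5.0))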
The procedure for analyzed sample extraction
The fat content of the samples was determined 1 day after the slaughter of the broiler chickens and 3, 6, 9, and 12 months after the slaughter of the broiler chickens. The samples were stored for 1 day at room temperature and for 6, 9, and 12 months under freezing conditions at -18 °C. The analyzed (dried) sample was extracted with boiling petroleum ether, and the fat was determined gravimetrically after evaporation of the extraction solvent. The AOAC Official Method 991.36 for Fat (Crude) in Meat and Meat Products was used. Petroleum ether was used as the solvent, together with a Det-gras N apparatus (Model 4002842) capable of the simultaneous extraction of 6 test portions.
Procedure for determining the fat of the analyzed sample
Empty aluminum cartridges were weighed; 6 pcs were needed for one analysis. Each assay sample was first mixed in a Grindomix 200 laboratory mixer, from which 12 g was weighed. Each weighed sample was transferred to a cellulose extraction cartridge, which was sealed with cotton wool. The filled sample cartridge was placed in an aluminum cartridge and connected to the Det-gras N extractor. The operating temperature of the apparatus was 120 °C and the fat extraction itself took 60 minutes. The fat was extracted from the analysis sample into the aluminum cartridge of the instrument. After the fat extraction was complete, the remaining petroleum ether was evaporated from the fat sample and each aluminum cartridge with fat was weighed. The difference between the weight of the aluminum cartridge with fat and the weight of the empty aluminum cartridge was considered to be the fat of the analysis sample. The weight of each fat analysis sample ranged from 8.55 to 9.97 g. Calculation of the fat for an analysed sample of the thigh muscle with skin:

Fat = (a / b) × 100 (2)

where: fat of the analysed sample, %; a = weight of fat extracted, g; b = weight of the analysed sample, g.
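A minimal sketch of the calculation in equation (2), with illustrative names.

def fat_percent(fat_extracted_g, analysed_sample_g):
    # Fat of the analysed sample, %, according to equation (2): (a / b) x 100
    return fat_extracted_g / analysed_sample_g * 100.0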
Acid value
The acid value was determined in fat samples 1 day after the slaughter of the broiler chickens and 6, 9, and 12 months after the slaughter of the broiler chickens. The samples were stored for 1 day at room temperature and for 6, 9, and 12 months under freezing conditions at -18 °C. The acid value expresses the quantity of potassium hydroxide (KOH), in mg, needed to neutralize the free fatty acids present in 1.0 g of fat (mg KOH.g-1).

RCOOH + KOH → RCOOK + H2O (3)

Often, the acid value is converted to a free fatty acid (FFA) content by multiplying the acid value by a factor that equals the molecular weight of the fatty acid concerned (usually oleic acid, MW = 282.4) divided by ten times the molecular weight of potassium hydroxide (56.1). This factor of ten stems from the fact that the acid value is expressed as mg.g-1, whereas the free fatty acid content is expressed as a percentage. When the free fatty acid content is expressed as 'wt% oleic acid,' this factor therefore equals 0.50. When examining the risk from food fat, the acid value expresses the degree of hydrolysis of fats. It is an indicator for evaluating the condition of a processed food raw material and its state under storage conditions.
Principle of determining the acid value
The principle of determining the acid value is to dissolve the fat in an ethanol-diethyl ether mixture and to titrate alkalimetrically in the presence of phenolphthalein.
Procedure for determining the acid number of fat
First, 25.0 mL burettes and extraction flasks were prepared. 2.5 g of fat was weighed into each extraction flask, and 25 mL of a 1:1 ethanol-diethyl ether mixture was added. Before titration, 2 drops of phenolphthalein indicator were added to the mixture in the extraction flask. Each extraction flask was gently heated with a laboratory gas burner. The burette was filled to 25.0 mL with potassium hydroxide solution before each measurement of a sample. The titration was complete when the sample in the extraction flask acquired a pale pink color that persisted for 30 seconds. Calculation (4): the acid value was calculated from the volume of potassium hydroxide solution consumed in the titration; a calculation sketch is given below.
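The displayed formula for calculation (4) did not survive extraction; the sketch below follows the standard definition of the acid value (mg KOH per g of fat). The concentration of the KOH titrant is not stated in the text, so it is passed in as a parameter, and the conversion to free fatty acid content uses the factor discussed above (282.4 / 561 ≈ 0.50). All names are illustrative.

KOH_MOLAR_MASS = 56.1      # g/mol
OLEIC_MOLAR_MASS = 282.4   # g/mol

def acid_value(v_koh_ml, c_koh_mol_l, fat_mass_g):
    # Acid value, mg KOH per g of fat: V[mL] * c[mol/L] * 56.1 / m[g]
    # (c_koh_mol_l is an assumption; the document does not state the titrant concentration)
    return v_koh_ml * c_koh_mol_l * KOH_MOLAR_MASS / fat_mass_g

def free_fatty_acids_percent(acid_value_mg_per_g):
    # FFA as wt% oleic acid: acid value * (282.4 / (10 * 56.1)), i.e. roughly acid value * 0.50
    return acid_value_mg_per_g * OLEIC_MOLAR_MASS / (10.0 * KOH_MOLAR_MASS)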
Peroxide value
The fat peroxide value was determined in fat samples 1 day after the slaughter of the broiler chickens and 6, 9, and 12 months after the slaughter of the broiler chickens. The samples were stored for 1 day at room temperature and for 6, 9, and 12 months under freezing conditions at -18 °C. The peroxide value is an indicator used to examine the oxidation of fats. It expresses the amount of active oxygen in 1.0 g of fat (µmol O2.g-1). The peroxide value is defined as the amount of peroxide oxygen per 1 kg of fat or oil. Traditionally this was expressed in units of milliequivalents, although in SI units the appropriate option would be mmol per kg (N.B. 1 milliequivalent = 0.5 mmol, because 1 mEq of O2 = 1 mmol/2 = 0.5 mmol of O2, where 2 is the valence). The unit of milliequivalent has commonly been abbreviated as mequiv or even as meq. The peroxide value is determined by measuring the amount of iodine that is formed by the reaction of the peroxides (formed in the fat or oil) with iodide ions.

2 I⁻ + H2O + HOOH → HOH + 2 OH⁻ + I2 (5)

The base produced in this reaction is taken up by the excess acetic acid present. The iodine liberated is titrated with sodium thiosulphate.

2 S2O3²⁻ + I2 → S4O6²⁻ + 2 I⁻ (6)

The acidic conditions (excess acetic acid) prevent the formation of hypoiodite (analogous to hypochlorite), which would interfere with the reaction. The indicator used in this reaction is a starch solution, in which amylose forms a blue-to-black complex with iodine and is colorless once the iodine has been titrated. A precaution that should be observed is to add the starch indicator solution only near the endpoint (the endpoint is near when fading of the yellowish iodine color occurs), because at high iodine concentrations starch is decomposed to products whose indicator properties are not entirely reversible.
Principle of determination of peroxide value
The principle of determining the peroxide value is the titrimetric determination of the iodine released from iodide by the hydroperoxides of unsaturated lipids in an acidic medium.
Procedure for the determination of the peroxide value
Erlenmeyer flasks with ground glass joints and glass stoppers, and a starch indicator, were prepared. 5.0 g of starch was weighed and 30.0 mL of water was added; the mixture was mixed thoroughly. 2.0 g of fat was weighed and transferred to each Erlenmeyer flask. Chloroform (10.0 mL) was added to the fat. The flask was then closed and the contents were shaken well by hand until the fat was completely dissolved. 1.0 mL of saturated aqueous potassium iodide solution and 15.0 mL of concentrated acetic acid were added to the dissolved sample. The flask was immediately closed and the contents were shaken again by hand for 1 minute. Following this procedure, the sample was kept in the closed flask in a dark place at room temperature, with the windows covered with blinds, for exactly 5 minutes. Then 75.0 mL of water was added to the sample in the flask and the contents were again shaken thoroughly by hand in the closed flask. After mixing, 5.0 mL of the prepared starch indicator (starch solution) was added to the contents of the flask. The sample thus prepared was titrated with a 0.01 mol.L-1 solution of sodium thiosulfate (Na2S2O3 x 5 H2O); the endpoint was determined by decolorization. In parallel with the determination of the peroxide value of the fat sample, a blank experiment with a fat-free sample was performed. Calculation (7): the peroxide value was calculated from the consumption of sodium thiosulfate solution corrected for the blank; a calculation sketch is given below.
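The displayed formula for calculation (7) did not survive extraction; the sketch below follows the standard iodometric definition, using the 0.01 mol/L thiosulfate concentration stated above and including the blank correction. The expression in µmol O2 per g of fat (equal to mmol per kg) follows from the note above that 1 milliequivalent = 0.5 mmol; all names and example values are illustrative.

def peroxide_value(v_sample_ml, v_blank_ml, thio_conc_mol_l, fat_mass_g):
    # Peroxide value in micromol O2 per g of fat (= mmol O2 per kg).
    # Milliequivalents of peroxide oxygen per kg: (V_sample - V_blank)[mL] * c[mol/L] * 1000 / m[g],
    # and 1 meq = 0.5 mmol of O2, hence the division by 2.
    meq_per_kg = (v_sample_ml - v_blank_ml) * thio_conc_mol_l * 1000.0 / fat_mass_g
    return meq_per_kg / 2.0

# Illustrative example with the 0.01 mol/L thiosulfate used above and a 2.0 g fat sample.
print(peroxide_value(v_sample_ml=0.45, v_blank_ml=0.05, thio_conc_mol_l=0.01, fat_mass_g=2.0))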
Statistical analysis of results
The data obtained from the measurements were statistically evaluated using statistical characteristics such as the mean (M) and the standard deviation (SD). The differences in each indicator between the groups, i.e. two dependent samples (control and experimental group), were evaluated by Student's t-test. The t-test result was reported as statistically significant at p ≤0.05 and p ≤0.01, and statistically non-significant at p >0.05. The statistical evaluation of the results was performed with the SAS program package, version 8.2.
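A minimal sketch of the statistical evaluation described above (mean, standard deviation, and a paired Student's t-test for two dependent samples), implemented here with SciPy rather than SAS; the function name and any example inputs are illustrative and not the study data.

import numpy as np
from scipy import stats

def compare_groups(control, experimental):
    # Mean, SD and paired t-test p-value for two dependent samples.
    control = np.asarray(control, dtype=float)
    experimental = np.asarray(experimental, dtype=float)
    t_stat, p_value = stats.ttest_rel(control, experimental)   # dependent (paired) samples
    return {
        "control":      (control.mean(), control.std(ddof=1)),
        "experimental": (experimental.mean(), experimental.std(ddof=1)),
        "t": t_stat,
        "p": p_value,
        "significance": "p <= 0.01" if p_value <= 0.01 else
                        "p <= 0.05" if p_value <= 0.05 else "p > 0.05",
    }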
Dry matter in thigh muscle with skin
The average dry matter content in the thigh muscle with skin 1 day, 6, 9, and 12 months after the slaughter of the broiler chickens is given in Table 2. Note to Table 2: n = multiplicity, M = mean, SD = standard deviation; numerical values in the t-test column marked '-': statistically non-significant difference between the control and experimental group (p >0.05).
In recent years, it has become common to use prophylactic drugs and antibiotic growth stimulants in the feeding of broiler chickens and other farm animals. However, the continued use of these compounds has led to consequences such as the development of resistant bacteria and antibiotic residues in meat and other animal products, which pose a risk to public health and the environment. This concern has led many countries, including European Union countries, to limit the use of antibiotics in feed. Therefore, it is necessary to identify safe alternatives to feed antibiotics (Jazi et al., 2020).
In our study, we focused on verifying the effect of oregano essential oil applied to chicken thighs on oxidative stability, depending on the storage time under freezing conditions. Positive results from the application of essential oils, whether in feed or on the product itself, are achieved through the effects of the bioactive substances contained in the essential oils. These bioactive compounds include hydrocarbons, phenols, esters, alcohols, acids, and steroids, which have a positive effect on the health of broiler chickens and on their performance or meat quality (Shirani et al., 2019). Sharma et al. (2020) state that essential oils can influence the performance and physiological processes of broiler chickens, both quantitatively and qualitatively. The aim of the study by Sabikun et al. (2019) was to investigate the post-mortem effects on the physico-chemical properties and oxidative stability of chicken thigh and breast muscles depending on the storage time under freezing conditions. Chicken breast and thigh muscles were obtained from 24 broiler chickens at 30 minutes or 1.5 hours after slaughter. They were immediately frozen at -75 °C and then stored at -20 °C for 90 days to measure meat quality characteristics. The results showed that longer freezing led to a deterioration of meat quality, with greater deterioration of post-rigor frozen muscles. Muscles frozen before rigor had a higher pH, water-holding capacity (about 90%) and sarcomere length, with lower thawing loss and cooking loss, than muscles frozen after rigor. The thigh muscles had better physico-chemical properties than the breast muscles, except for loss during cooking. Therefore, immediate freezing could be an effective way to minimize the deterioration of the quality of frozen chicken.
The condition of rigor mortis, or the post-mortem period, causes various physical and biochemical changes in the muscles through the activity of the proteolytic system after the slaughter of broiler chickens. This proteolytic system is associated with qualitative features such as tenderness, juiciness, water retention, taste, protein content, and structural muscle degradation (Carvalho et al., 2017). When frozen chicken meat is thawed, the muscles lose weight and a considerable amount of water, leading to a reduction in the juiciness of the meat. The research of Sabikun et al. (2019) aimed to evaluate pre- and post-rigor frozen muscles with respect to the quality of chicken meat. It focused on changes in muscle pH, color, water retention capacity, heating losses, cooking losses, shear strength, sarcomere length, and oxidative stability of thigh and breast muscles frozen before and after rigor, during different storage times up to 90 days under freezing conditions (-20 °C). Based on the results, a conclusion was formulated: prolonged post-mortem aging before freezing could have adverse effects on the quality characteristics of frozen chicken muscles. The results showed positive effects of freezing before the onset of rigor mortis (within 30 minutes after the slaughter of broiler chickens) on the quality of chicken muscles. Muscles frozen before rigor had a higher pH, lower drip loss, lower thawing loss, and lower cooking loss than muscles frozen after rigor (Sabikun et al., 2019). We recorded an average dry matter content of the thigh muscle with skin, 1 day after the slaughter of the broiler chickens and kept at room temperature, of 27.21 g.100 g-1 in the experimental group, compared with an average dry matter content of 26.91 g.100 g-1 in the control group. The difference in the dry matter content of the thigh muscle with skin 1 day after the slaughter of the broiler chickens, kept at room temperature, was not statistically significant (p >0.05) between the control and experimental groups. Based on the statistical evaluation of the results expressed by the standard deviation, we can state that the measured values of the dry matter content in the thigh muscle with skin kept at room temperature 1 day after the slaughter of the broiler chickens fluctuated more in the control group than in the experimental group (SD = 2.03 versus SD = 1.85).
Freezing is the preferred way to preserve food on the world meat export market, which amounts to more than $13 billion a year. Despite its ability to maintain the quality and safety of meat, the freeze-thaw cycle remains a major problem for processors and consumers. Temperature fluctuations or abuse caused by the freeze-thaw cycle stimulate lipid oxidation and accelerate discoloration of the meat surface. Temperature fluctuations, which are a major problem in the cold-chain meat industry, especially in developing countries, are related to physiological and biochemical changes in animal muscles. The storability of meat is usually determined by its appearance, structure, color, aroma, microbial activity, and nutritional value and is affected by frozen storage and subsequent thawing. The main damage to frozen meat during storage is caused by the breakdown of fats and proteins. The meat industry has a strong interest in verifying whether a meat product is actually fresh or has been previously frozen, because of the large price differences between fresh, frozen, and thawed meat. It is difficult for consumers to detect the changes in meat quality that occur in some products when they have been frozen. Most research on freezing and thawing red meat focuses on reducing moisture loss. Water loss is directly related to the water retention capacity of muscle proteins, and reduced water content alters key quality parameters such as color and texture. The freeze-thaw cycle is a major contributor to reduced water retention capacity, and unacceptably low water retention capacity causes great deterioration of products. However, limited progress has been made in understanding the real mechanisms of the changes in meat quality during freeze-thaw cycles. The reduction in water retention capacity is directly related to the denaturation of proteins in the muscle fiber structure. Protein oxidation due to freeze-thaw cycles has been largely ignored, especially in the commercial broiler chicken production chain. Therefore, studies are needed to understand the effect of freeze-thaw cycles on protein stability and its relation to lipid and protein oxidation. The result of an increasing number of freezing and thawing cycles is a higher degree of oxidation of lipids and proteins, as evidenced by a higher content of malondialdehyde and carbonyl compounds and a lower content of sulfhydryl groups. Repeated freeze-thaw cycles increase lipid and protein oxidation and reduce the water retention and color fastness of chicken meat (Ali et al., 2015). We recorded an average dry matter content in the thigh muscle with skin stored for 6 months under freezing conditions at -18 °C after the slaughter of the broiler chickens of 28.31 g.100 g-1 in the experimental group. This value is slightly higher than that of the control group, 27.17 g.100 g-1. The difference in the dry matter content of the thigh muscle with skin stored for 6 months under freezing conditions at -18 °C after the slaughter of the broiler chickens was not statistically significant (p >0.05) between the control and experimental groups. By statistical evaluation of the results expressed by the standard deviation, we found that the measured values of the dry matter content of the thigh muscle with skin stored for 6 months under freezing conditions at -18 °C after the slaughter of the broiler chickens fluctuated more in the experimental group than in the control group (SD = 1.53 vs. SD = 1.44).
After 9 months of storage under freezing conditions at -18 °C following the slaughter of the broiler chickens, the average dry matter content in the thigh muscle with skin was 28.26 g.100 g-1 in the experimental group and 27.89 g.100 g-1 in the control group; the value of the experimental group was slightly higher. The difference in the dry matter content of the thigh muscle with skin stored for 9 months under freezing conditions at -18 °C after the slaughter of the broiler chickens was not statistically significant (p >0.05) between the control and experimental groups. According to the statistical evaluation expressed by the standard deviation, the measured values of the dry matter content in the thigh muscle with skin stored for 9 months under freezing conditions at -18 °C after the slaughter of the broiler chickens fluctuated more in the experimental group than in the control group (SD = 2.01 vs. SD = 1.41). After 12 months from the slaughter of the broiler chickens, with storage under freezing conditions at -18 °C, the average dry matter content in the thigh muscle with skin was 28.72 g.100 g-1 in the experimental group and 28.41 g.100 g-1 in the control group. The difference in the dry matter content of the thigh muscle with skin stored for 12 months under freezing conditions at -18 °C after the slaughter of the broiler chickens was not statistically significant (p >0.05) between the control and experimental groups. From the statistical evaluation of the results expressed by the standard deviation, we found that the variation of the measured dry matter content of the thigh muscle with skin stored for 12 months under freezing conditions at -18 °C after the slaughter of the broiler chickens was lower in the experimental group than in the control group (SD = 1.36 vs. SD = 1.52). To compare our results for the dry matter content in the thigh muscle, we selected the results of Haščík et al. (2012) for the Ross 308 hybrid combination of broiler chickens. Their results are higher than ours: they recorded a dry matter content of 31.51 g.100 g-1 in the control group, in which the broiler chickens were fed a feed mixture with coccidiostats, and 30.21 g.100 g-1 or 29.88 g.100 g-1 in the experimental groups with different doses of pollen extract.
In an experiment with broiler chickens, Semjon et al. (2020) investigated, by chemical analysis, the dry matter content in the thigh muscle of the Cobb 500 hybrid combination and measured slightly lower values than our results. These authors report dry matter contents of 26.21 g.100 g-1 and 26.04 g.100 g-1 in the thigh muscle. Their research on broiler chickens focused on different doses of humic substances in the feed mixtures.
Fat in thigh muscle with skin
The average fat content in the thigh muscle with skin 1 day, 6, 9, and 12 months after the slaughter of the broiler chickens is given in Table 3. The fat content of chicken meat is closely related to the nutrition and feeding of broiler chickens; the lipid profile of these tissues can be assumed to reflect the lipid profile of the feed (type and dose) (Sirri et al., 2003). The interactions between the nutrients that make up the feed and the synthesis and activity of lipogenic enzymes are responsible for a wide range of lipid storage possibilities in adipose tissue. In addition, the biological activity of some fatty acids stimulates or inhibits specific lipogenic genes encoding enzymes (Jump, 2002). Sierżant et al. (2018) state in their study that meat obtained from broiler chickens is a source of animal protein of high biological value and is therefore appreciated by the consumer. On the other hand, consumers prefer chicken meat for its low fat content and low energy value compared to meat from large livestock.
Fats are vital substances for proper human nutrition. In addition to providing energy for the body's biological processes, fats contain large amounts of substances, such as essential fatty acids and fat-soluble vitamins, that only the diet can provide. On the other hand, when obtaining fats from food, it is necessary to know the conditions that determine their qualitative attributes (Purriños et al., 2011), in our case in the chicken thigh muscle, as addressed in the methodology of this study.
Fats are involved in processes that affect the taste of meat and contribute to improving its tenderness and juiciness (Amaral et al., 2018).
Therefore, the fat content and its composition are crucial for consumers due to their importance for meat quality and nutritional value (Wood et al., 2004).
However, fats are prone to degradation. Fat oxidation is a major non-microbial cause of deterioration in the quality of meat and meat products (Lorenzo et al., 2012).
Degradation of chicken meat begins with the killing of the broiler chicken and continues gradually until the final product is consumed (Chaijan et al., 2017). Therefore, all stages of the handling of broiler chickens and chicken meat, and of the processing and storage of chicken meat, must be carefully controlled to avoid possible spoilage (Richards, 2006) and to minimize economic losses in the meat industry (Králová, 2015). We recorded an average fat content of the thigh muscle with skin, kept at room temperature for 1 day after the slaughter of the broiler chickens, of 2.77 g.100 g-1 in the experimental group, compared with an average fat content of 2.56 g.100 g-1 in the control group. The difference in the fat content of the thigh muscle with skin was not statistically significant (p >0.05) between the control and experimental groups. Based on the statistical evaluation of the results expressed by the standard deviation, we can state that the measured values of the fat content in the thigh muscle with skin kept at room temperature 1 day after the slaughter of the broiler chickens fluctuated more in the control group than in the experimental group (SD = 0.16 vs. SD = 0.09). In the thigh muscle with skin stored for 6 months under freezing conditions at -18 °C after the slaughter of the broiler chickens, the average fat content in the experimental group was 2.57 g.100 g-1; this value is lower than that of the control sample, 2.68 g.100 g-1.
The difference in the fat content of the thigh muscle with skin stored for 6 months under freezing conditions at -18 °C after the slaughter of the broiler chickens was not statistically significant (p >0.05) between the control and experimental groups. From the statistical evaluation of the results expressed by the standard deviation, we found that the measured values of the thigh muscle fat content with skin stored for 6 months under freezing conditions at -18 °C after the slaughter of the broiler chickens fluctuated more in the control group than in the experimental group (SD = 0.15 vs. SD = 0.09). The average fat content of the thigh muscle with skin stored for 9 months under freezing conditions at -18 °C after the slaughter of the broiler chickens was 2.66 g.100 g-1 in the experimental group and slightly higher, 2.72 g.100 g-1, in the control group. From the statistical evaluation of the results expressed by the standard deviation, we found that the measured values of the thigh muscle fat content with skin stored for 9 months under freezing conditions at -18 °C after the slaughter of the broiler chickens fluctuated more in the experimental group than in the control group (SD = 0.23 vs. SD = 0.18).
After 12 months from the slaughter of the broiler chickens, when the thigh muscles were stored under freezing conditions at -18 °C, the average fat content in the thigh muscle with skin was 2.52 g.100 g-1 in the experimental group versus 2.76 g.100 g-1 in the control group. The difference in the fat content of the thigh muscle with skin stored for 12 months under freezing conditions at -18 °C was not statistically significant (p >0.05) between the control and experimental groups. By statistical evaluation of the measured results, expressed by the standard deviation, we found that the values of the fat content in the thigh muscle with skin of the experimental group stored for 12 months under freezing conditions at -18 °C fluctuated more than those of the control samples (SD = 0.19 vs. SD = 0.13).
Freeze storage cannot prevent oxidative degradation and microbial or enzymatic degradation (Jay et al., 2005).
It has been known in the past, as is the case today, that chemical preservation methods for meat are quite beneficial in combination with refrigeration to optimize stability, product quality while maintaining freshness and nutritional value (Cassens, 1994).
As reported by Ali et al. (2015), the main degradation of frozen meat during storage is caused by the processes of fat and protein degradation. It is through these changes that it is possible to find out whether a given product is fresh or has been frozen before. Especially in the meat industry, this is important from a price point of view, as fresh products have a higher price than frozen or thawed products.
All foods that contain lipids, even in very small amounts (<1%), can undergo oxidation, leading to yellowing (Wąsowicz et al., 2004). Common livestock species, such as beef and lamb (ruminants), contain higher levels of saturated fatty acids in muscle mass and adipose tissue compared to chicken meat, which contains higher levels of polyunsaturated fatty acids (Wood et al., 2008).
Acid value in the thigh muscle with skin
The average acid value in the thigh muscle with skin 1 day after the slaughter of the broiler chickens and 6, 9, and 12 months after the slaughter of the broiler chickens is given in Table 4. Note to Table 4: n = multiplicity, M = mean, SD = standard deviation; numerical values in the t-test column marked '-': statistically non-significant difference between the control and experimental group (p >0.05).
Leonel et al. (2007) analyzed breast and thigh muscle. They considered that the higher levels of fat found in the thigh muscle compared to the breast muscle could indicate a higher probability of oxidative processes in the thigh muscle. The view that the different fat content of the thigh muscle and the breast muscle is not related to the oxidative processes of fat was reported in the study by Gardini (2000).
However, there is a more recent study which states that muscles with a higher fat content show a greater tendency to oxidize through a continuous chain reaction of free radicals (Ruban, 2009).
The main goal of the meat industry and food safety researchers is to understand the mechanisms of fat oxidation and to identify the most effective methods of managing this process (Domínguez et al., 2018). Lipid oxidation leads to a deterioration of some quality characteristics of the meat, such as taste, texture, and color, and also reduces the shelf life along with the formation of some toxic compounds (Mohamed et al., 2008).
A similar view is reported by Ali et al. (2015).
We recorded an average acid value in the thigh muscle with skin, stored for 1 day at room temperature after the slaughter of the broiler chickens, of 4.31 mg KOH.g-1 in the experimental group, compared to 4.68 mg KOH.g-1 in the control sample. The difference in the average acid value of the thigh muscle fat with skin kept at room temperature for 1 day after the slaughter of the broiler chickens was not statistically significant (p >0.05) between the control and experimental groups. Based on the statistical evaluation of the results expressed by the standard deviation, we can state that the measured acid value of the thigh muscle fat with skin kept at room temperature 1 day after the slaughter of the broiler chickens fluctuated slightly more in the control group than in the experimental group (SD = 1.16 vs. SD = 1.13).
In the thigh muscle with skin stored for 6 months under freezing conditions at -18 °C after the slaughter of the broiler chickens, we recorded an average acid value of 6.08 mg KOH.g-1 in the experimental group. This value is lower than that of the control group, 6.46 mg KOH.g-1. The difference in the acid value of the thigh muscle with skin stored for 6 months under freezing conditions at -18 °C after the slaughter of the broiler chickens was not statistically significant (p >0.05) between the control and experimental groups. From the statistical evaluation of the results expressed by the standard deviation, we found that the measured acid values of the thigh muscle with skin stored for 6 months under freezing conditions at -18 °C after the slaughter of the broiler chickens fluctuated more in the experimental group than in the control group (SD = 1.19 vs. SD = 0.88).
In the thigh muscle stored under freezing conditions at -18 °C for 9 months after the slaughter of the broiler chickens, the average acid value was 7.21 mg KOH.g-1 in the experimental group and 7.68 mg KOH.g-1 in the control sample. No statistically significant difference (p >0.05) was found between the control and experimental groups in the acid value of the thigh muscle with skin stored for 9 months under freezing conditions at -18 °C after the slaughter of the broiler chickens. By statistical evaluation of the results expressed by the standard deviation, we found that the measured acid values of the thigh muscle fat with skin stored for 9 months under freezing conditions at -18 °C after the slaughter of the broiler chickens fluctuated more in the experimental group than in the control group (SD = 1.18 vs. SD = 1.07).
In thigh muscles stored for 12 months under freezing conditions at -18 °C after the slaughter of the broiler chickens, we recorded an average acid value of 9.26 mg KOH.g-1 in the experimental group of thigh muscle with skin and 9.78 mg KOH.g-1 in the control group. The difference in the measured acid values of the thigh muscle with skin stored for 12 months under freezing conditions at -18 °C after the slaughter of the broiler chickens was not statistically significant (p >0.05) between the control and experimental groups. From the statistical evaluation of the results expressed by the standard deviation, we found that the measured acid values of the thigh muscle with skin stored for 12 months under freezing conditions at -18 °C after the slaughter of the broiler chickens fluctuated more in the control group than in the experimental group (SD = 1.23 vs. SD = 1.16).
It is well known that unsaturated fatty acids and oxygen are components that react during the fat oxidation process. In addition, other components may promote or prevent oxidative reactions. Fats can be oxidized in three main ways, which involve complex reactions: autoxidation, enzymatically catalyzed oxidation, and photooxidation. Of the three mechanisms, the most important process of lipid oxidation in meat is autoxidation, which is a continuous chain reaction of free radicals (Cheng, 2016).
Enzymatic and photooxidation mechanisms differ from autoxidation only by the formation of hydroperoxides during the initiation phase (Chaijan et al., 2017). Although the free-radical mechanism explains many of the changes observed in meat, it does not provide a detailed and complete description of the changes induced in the reactants and derived products during the oxidation process. The main challenge, therefore, is to complete a scheme that can fully explain all the agents, intermediates, and reactions in fat oxidation (Ghinimi et al., 2017).
Peroxide value in thigh muscle with skin
The average peroxide value in the thigh muscle with skin 1 day, 6, 9, and 12 months after the slaughter of broiler chickens is given in Table 5. After 1 day of storage at room temperature, the average peroxide value in the thigh muscle with skin was 0.99 µmol O2.g-1 in the experimental group, compared with 1.11 µmol O2.g-1 in the control group. The difference between the control and experimental groups was statistically significant (p ≤0.01). Based on the standard deviations, the measured peroxide values fluctuated more in the control group than in the experimental group (SD = 0.52 vs. SD = 0.43).
An increase in hydroperoxides is observed in the initial stages of fat oxidation because their rate of formation is higher than their rate of decomposition. These compounds are unstable. In more advanced stages of oxidation, the decomposition of hydroperoxides outpaces their formation, and a decrease in the hydroperoxide content (peroxide value) is therefore observed. This means that a low peroxide value can correspond to early as well as advanced stages of fat oxidation (Estévez et al., 2009).
The peroxide value is a commonly used indicator of the degree of oxidation. Previous research and reviews point to its effectiveness mainly in the initial stages of oxidation processes (Shahidi and Wanasundara, 2002). It follows that in the advanced stages of fat oxidation, using the peroxide value as an oxidation indicator could lead to an underestimation of the degree of oxidation (Ross and Smith, 2006). In thigh muscle with skin stored for 6 months under freezing conditions at -18 °C after the slaughter of broiler chickens, the average peroxide value in the experimental group was 1.02 µmol O2.g-1, which is lower than the 1.37 µmol O2.g-1 of the control group.
The difference in peroxide value in thigh muscle with skin stored for 6 months under freezing conditions at -18 °C after the slaughter of broiler chickens was not statistically significant (p >0.05) between the control and experimental groups. Based on the standard deviations, the measured peroxide values fluctuated more in the control group than in the experimental group (SD = 0.68 vs. SD = 0.56).
In thigh muscle with skin stored for 9 months under freezing conditions at -18 °C after the slaughter of broiler chickens, the average peroxide value was 2.01 µmol O2.g-1 in the experimental group and 2.47 µmol O2.g-1 in the control group, i.e., the average value of the experimental group was lower. The difference was not statistically significant (p >0.05) between the control and experimental groups. Based on the standard deviations, the peroxide values fluctuated more in the experimental group than in the control group (SD = 0.98 vs. SD = 0.93).
In thigh muscle with skin stored for 12 months under freezing conditions at -18 °C after the slaughter of broiler chickens, the average peroxide value was 2.96 µmol O2.g-1 in the experimental group and 3.48 µmol O2.g-1 in the control group. The difference between the control and experimental groups was statistically significant (p ≤0.05). Based on the standard deviations, the measured peroxide values fluctuated more in the control group than in the experimental group (SD = 1.23 vs. SD = 1.09).
Comparing our peroxide values for thigh muscle with the results of the experiment of Klimentová and Angelovičová (2019), it can be stated that the values differ slightly. Those authors did not find a statistically significant difference in peroxide value between the experimental group receiving oregano essential oil and the control group. In our experiment, a statistically significant difference (p ≤0.01) was confirmed between the control and experimental groups after 1 day of sample storage at room temperature. A second difference from the above study was observed when the thigh muscle was stored under freezing conditions at -18 °C for 12 months, where our results showed a statistically significant difference (p ≤0.05) between the control and experimental groups. The study of these authors evaluated the peroxide value of frozen chicken thighs after the application of oregano essential oil as a feed supplement for broiler chickens, and they recommended the use of oregano essential oil in feed mixtures in a proportion of 0.05%.
The research results of Marcinčák et al. (2008) showed that oregano essential oil was effective in slowing down the oxidation of fats compared to the control group. These authors also found that the thigh muscle was more prone to fat oxidation than the breast muscle (p ≤0.05), which differs from the findings reported by Gardini (2000) and Leonel et al. (2007), according to whom the different fat content of the thigh and breast muscle is not related to the oxidative processes of fat. The conclusion of Marcinčák et al. (2008) is, however, consistent with the opinion of Ruban (2009), who states that muscles with a higher fat content show a greater tendency to oxidize through a continuous chain reaction of free radicals.
CONCLUSION
The evaluation of the results of the application of oregano essential oil led to the following conclusions for the chicken thighs: (a) no statistically significant difference was found in the dry matter content and fat content of thigh muscle with skin when stored for 1 day at room temperature or for 6, 9 and 12 months under freezing conditions at -18 °C, compared to a control group whose starter and growth feed mixtures contained coccidiostats; (b) no statistically significant difference was found in the acid value of thigh muscle with skin when stored for 1 day at room temperature or for 6, 9 and 12 months under freezing conditions at -18 °C, compared to the same control group; (c) a statistically significant difference was found in the peroxide value of thigh muscle with skin when stored for 1 day at room temperature and for 12 months under freezing conditions at -18 °C, and no statistically significant difference when stored for 6 and 9 months under freezing conditions at -18 °C, compared to the same control group.
In conclusion, maintaining the oxidative stability of chicken meat requires knowing the factors that affect it and preparing the conditions for its maintenance. The rate and extent of lipid oxidation in muscle tissue depend on many factors, including the broiler chickens' feed and the handling of the meat after slaughter. Chicken meat is generally prone to oxidative degradation because it is characterised by a high concentration of polyunsaturated fatty acids. With a sufficient amount of effective antioxidants, chicken meat can behave as a homoeostatic system that remains essentially free of oxidised compounds and reactive components.
These issues are the subject of further research in the field of oxidative stability of chicken meat. | 2021-10-15T16:22:17.633Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "053f918a38da107b1d7d0f2d524a0c2160895dcc",
"oa_license": "CCBY",
"oa_url": "https://potravinarstvo.com/journal1/index.php/potravinarstvo/article/download/1690/1977",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e638ef766d689f538ab1a2e9e7fe7bca61512474",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
234373351 | pes2o/s2orc | v3-fos-license | On the Selection of Charging Facility Locations for EV-Based Ride-Hailing Services: A Computational Case Study
The uptake of Electric Vehicles (EVs) is rapidly changing the landscape of urban mobility services. Transportation Network Companies (TNCs) have been following this trend by increasing the number of EVs in their fleets. Recently, major TNCs have explored the prospect of establishing privately owned charging facilities that will enable faster and more economical charging. Given the scale and complexity of TNC operations, such decisions need to consider both the requirements of TNCs and local planning regulations. Therefore, an optimisation approach is presented to model the placement of CSs with the objective of minimising the empty time travelled to the nearest CS for recharging as well as the installation cost. An agent-based simulation model has been set up for the area of Chicago to derive the recharging spots of the TNC vehicles, and in turn derive the charging demand. A mathematical formulation for the resulting optimisation problem is provided alongside a genetic algorithm that can produce solutions for large problem instances. Our results refer to a representative set of the total data for Chicago and indicate that nearly 180 CSs need to be installed to handle the demand of a TNC fleet of 3000 vehicles.
Introduction
Mobility as a Service (MaaS) platforms such as Demand Responsive Traffic (DRT), car-sharing and ride-sharing platforms have increasingly become popular around the world as remedial measures to reduce traffic congestion. These schemes have introduced major shifts in the interaction between travellers, transport infrastructure, and public transport schemes. Several initiatives that promote the use of MaaS schemes to reduce private vehicle usage already exist [1,2]. Their introduction has also coincided with the increasing uptake of Electric Vehicles (EVs), which are more environmentally friendly than vehicles with conventional combustion engines [3][4][5][6]. However, they suffer from decreased range compared to conventional vehicles. Although current research efforts are primarily focused on new battery technologies to increase their range or make them more affordable [7], these technologies have not yet been developed enough to warrant widespread adoption. Charging speeds and battery capacities, which are matters of particular concern for fleet operators, have been improving recently [8].
Even though there has been continuous investment in new EV Charging Stations (CSs), the growth in the number of EVs is currently surpassing the growth in the number of charging points [9], particularly in the light of further demand. Notably, in 2018 only 5% of vehicle charging took place at on-street CSs, at car parks or at park & ride facilities, while the remaining 95% took place at work or at home [10]. Most ride-sharing fleet operators have plans to transition their fleets towards EVs in the near future, with many already operating a mix of conventional, hybrid and electric vehicles. The average distance travelled by a ride-sharing EV is around 1000 miles (1600 km) per week [11], meaning that it would need to be recharged every one or two days. Queue delays at CSs [12] add to this and worsen the problem further. Therefore, to maintain the current level of convenience provided by MaaS platforms, the supporting charging infrastructure needs to expand considerably to sufficiently meet the demand.
Chicago is a city with high demand for ride-sharing. According to recent reports, Uber is going to invest over two billion dollars in the city of Chicago in the next decade to expand its business [13]. Furthermore, the Chicago City Council supports the transition to fully electric vehicles and the implementation of CSs of both standard and rapid types [14]. It is estimated that up to 80,000 EVs will be adopted across the City of Chicago by 2030, meaning that around 2700 CSs will be required by then to cover the demand [15].
However, only 520 level-2 public CSs (approximately 3 h to charge with a charge current of 25-32A) and 82 level-3 CSs (approximately 30 min with a charge current up to 63A) have been deployed [16], leaving a considerable gap in the future charging demand. These calculations were made considering the C-rate (measurement of the charge and discharge current with respect to its nominal capacity).
Given this shortage, the search for an available charging location is likely to reduce the efficiency of an EV-based ride-sharing platform and contribute to an increase in the overall number of empty vehicle-miles travelled, a commonly used metric of fleet efficiency and a proxy for contribution to congestion. Considering also that TNCs aim to transition to fully electric fleets by 2025 (e.g., [17]), it is evident that the number of CSs must significantly increase. Several TNCs have participated in deploying such charging facilities and have further planned to integrate EV fleet operations with the siting of such CSs [18].
Our Contribution
This paper aims to find the locations of future CSs for TNC vehicles in an efficient way, and identify a number of stations that is sufficient to cover the charging demand. Having as a case study the area of Chicago, operational data of a taxi fleet are used to estimate the charging demand. Using an agent-based simulation model we derive the locations where the vehicles need recharging in the network. The candidate locations for the placement of facilities are extracted by a k-means clustering over the nodes of Chicago as extracted from Open Street Map [19]. The selected locations are obtained by solving (approximately) an optimisation problem using a Genetic Algorithm with the objective of minimising the incurred total cost (installation cost and cost for empty VMT).
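As an illustration of the candidate-generation step, the following is a minimal sketch (an assumed workflow, not the authors' code) that clusters OpenStreetMap street-network nodes into candidate CS locations with k-means. The place query, the number of clusters k, and clustering on raw longitude/latitude are illustrative simplifications.

```python
import numpy as np
import osmnx as ox
from sklearn.cluster import KMeans

# Street network of the case-study area (drivable roads only)
G = ox.graph_from_place("Chicago, Illinois, USA", network_type="drive")
nodes = ox.graph_to_gdfs(G, edges=False)            # GeoDataFrame of nodes
coords = np.column_stack([nodes["x"], nodes["y"]])  # lon/lat pairs

k = 400  # illustrative number of candidate locations (not taken from the paper)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(coords)

# Snap each cluster centre to the nearest network node to obtain candidate nodes
candidate_nodes = ox.distance.nearest_nodes(
    G, X=km.cluster_centers_[:, 0], Y=km.cluster_centers_[:, 1]
)
print(len(set(candidate_nodes)), "candidate CS locations")
```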
The paper is organised as follows: the current literature is reviewed in Section 2. In Section 3 we describe our model and present the genetic algorithm to solve the problem. In Section 4 we present our results in a case study of the Chicago network. Finally, Section 5 concludes our study. Table 1 presents the abbreviations of the phrases used in this study.
Literature Review
A wide range of studies have focused on variations of the charging infrastructure planning problem, in order to provide city planners with efficient ways of placing these resources.
Asamer et al. [20] proposed a system for the placement of a limited number of fast charging stations dedicated to ETs. The authors first identify the regions where the stations should be placed by solving an optimisation problem with the objective of maximally satisfying the charging demand under a predefined budget. A simulation of EVs in the city of Vienna using customer origin-destination trip data from radio taxi providers is performed to derive the charging demand. The exact locations within the regions are then identified by taking into account a number of environmental constraints.
Han et al. [21] proposed an optimisation model with the objective of minimising the total infrastructure cost and waiting times. GPS trajectory data from conventional taxis and battery data from EVs from Daejon were used considering the itinerary interception approach. Jung et al. [12] also considered an itinerary interception with stochastic passenger demand and proposed a bilevel optimisation-simulation model to deal with the dynamic nature of taxi itineraries and the queue delays at the stations. The upper-level problem aims to find the optimal locations of the stations and the allocation of chargers to ETs, while the lower level problem based on a simulation in the area of Seoul that uses stochastic customer demand, aims at minimising the travel times of the passengers.
Sellmair et al. [22] present an economic analysis aiming to find the optimal number of charging stations placed at taxi stands such that a trade-off between the service of ETs (mileage and earnings) and the installation cost is achieved. They present an event-based simulation in Munich using driving patterns from combustion engine taxis and proposed a heuristic to solve an optimisation problem in order to determine the optimal number of stations.
Tu et al. [23] developed a spatial-temporal demand coverage model by extracting both spatial and temporal attributes from GPS data of taxis in Shenzen. The authors aimed at the maximisation of the ET service level i.e., the distance travelled to cover the customer demand, while simultaneously minimising the total waiting time for charging.
A multi-objective optimisation framework for the deployment of ET charging infrastructure incorporating a spatial-temporal simulation model was proposed in [24] to estimate the charging demand. The charging demand is estimated according to the customer demand and the competition between the retailers. The deployment of the stations is established by taking into consideration the impact on the passengers, the drivers and the electricity retailers.
Gan et al. [25] studied the deployment of (only) fast CSs allowing a level of stochasticity, by considering elastic charging demand in the network in both the waiting time in CS queues and the covered travelled distance.
Xie et al. [26] propose a two stage method for the siting and sizing of CSs in highway networks. In the first stage the optimal locations of the CSs are extracted by the solution of a Mixed Integer Linear Program (MILP), while in the second stage the optimal sizes of the components of each CS are determined by an optimisation model.
Yang et al. [27] present a data-driven optimisation model with the objective of minimising the infrastructure cost under the constraint of charging demand satisfaction. This is achieved after deriving a trade-off between the number of charging points and the number of waiting spaces that need to be installed in the stations. The authors consider a queuing model to describe charging congestion and estimate the waiting time based on GPS trajectory data from a taxi fleet in Changsha. Due to the non-linear mixed integer nature of the queue system, the authors solve the problem approximately by transforming the formulation into an efficient Integer Linear Program (ILP).
Gacias et al. [28] study both the problem of locating charging stations (strategic level) and the charging management of an ET fleet (operational level).
Marianov et al. [29] studied probabilistic location-allocation models of maximal coverage problems and introduced an approach to constrain the waiting time at servers. Under this probabilistic approach, a constrained waiting time ILP formulation can be introduced to approximate an optimisation problem instead of a highly non-linear one where the objective minimises the waiting time of an M/M/m queuing system with a fixed number of m chargers per CS. Specifically, given input parameters α, b, the authors require that the probability of a queue of length at most b vehicles at a CS, is above a given threshold α. They later extended the model to account for variable number of chargers per CS retaining the linearity [30].
The charging demand can be addressed as point demand where the demand is aggregated in distinct places, or flow demand where the demand is represented as a flow that passes along the routes of the travellers. Table 2 sums up various features about the related works.
In contrast to previous works, an explicit distinction between on-street and off-street EV charging stations is made in this paper. We hereby assume that the number of on-street CSs per region is limited due to restrictions set by local authorities, so as not to reduce the parking spaces available to residents. Existing CSs are assumed to be on-street and can be used by either traffic or TNC vehicles, while the newly placed infrastructure can only be private property of a TNC company and can therefore only be used by its fleet.
Methodology
Prior to formulating the problem the following assumptions are made: (1) The battery discharges uniformly; (2) The state of charge of a vehicle's battery after recharging at the pick-up location of a customer is 90%; (3) The threshold of the battery below which the vehicle needs recharging is set to 20%; (4) The travel time of a vehicle to go along a road is considered to be the average travel time over a day; (5) The EVs share of the total number of vehicles has been assumed to be 25% [32].
Agent Based Simulation
Operational data from Transportation Network Providers are given as input into an agent based simulation model in order to determine the recharging nodes of a fleet during the day. This simulation process has previously been used for the assignment and pricing of shared rides [33]. Figure 1 presents the flowchart of the processes in this study.
At the beginning of the simulation, the vehicles dispatch from various locations within a predefined area. Upon receiving a trip request, the nearest available vehicle dispatches to the customer's location, provided that there is enough battery charge to carry out the requested trip. A vehicle with a fully charged battery can travel a maximum distance of 180 km. Once the battery level drops below 20%, the vehicle dispatches to the nearest available CS if no customer is being served at that moment, and that location is added to the set of recharging nodes. If, however, there are customers in the vehicle, it dispatches to the nearest CS once the last on-board customer is dropped off, and that drop-off location is added to the set of recharging nodes.
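The recharge-decision logic described above can be sketched as follows. This is a simplified illustration under the stated assumptions (180 km full range, 20% threshold, 90% state of charge after recharging), not the authors' C# implementation.

```python
from dataclasses import dataclass

FULL_RANGE_KM = 180.0      # maximum distance on a full battery (from the text)
RECHARGE_THRESHOLD = 0.20  # state of charge below which the vehicle must recharge
SOC_AFTER_CHARGE = 0.90    # assumed state of charge after recharging (from the text)

@dataclass
class Vehicle:
    location: int      # current network node
    soc: float = 1.0   # state of charge, 0..1
    passengers: int = 0

def drive(vehicle: Vehicle, km: float, destination: int) -> None:
    """Move the vehicle and discharge the battery uniformly with distance."""
    vehicle.soc -= km / FULL_RANGE_KM
    vehicle.location = destination

def maybe_record_recharge(vehicle: Vehicle, recharge_nodes: list) -> bool:
    """If the battery is low and no passenger is on board, record the current
    node as a recharging node (these nodes later define the charging demand)."""
    if vehicle.soc < RECHARGE_THRESHOLD and vehicle.passengers == 0:
        recharge_nodes.append(vehicle.location)
        vehicle.soc = SOC_AFTER_CHARGE  # charging itself is treated as instantaneous here
        return True
    return False

# Example: a vehicle serving trips until it needs to recharge
v, recharge_nodes = Vehicle(location=0), []
for trip_km, dest in [(40, 1), (60, 2), (55, 3)]:
    drive(v, trip_km, dest)
    maybe_record_recharge(v, recharge_nodes)
print(recharge_nodes)  # node(s) where a recharge was triggered
```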
Charging Demand
The charging demand is spatially distributed and is separated into traffic demand and fleet demand. The former refers to the number of vehicles not belonging to the fleet that need recharging, and the latter refers to the number of vehicles from the TNC fleet that need recharging. At each publicly available on-street CS, the demand originates from both the TNC fleet and the traffic, while on the TNC privately owned off-street CSs, the demand originates from the TNC fleet only.
Location-Allocation Problem
Let G = (V, E) be the network graph with V denoting the set of nodes and E the set of links. Let C and Ĉ denote, respectively, the set of candidate nodes and the set of nodes where CSs already exist. Let R denote the set of recharging nodes, i.e., the locations where the fleet vehicles need to be recharged, and F denote the set of nodes where traffic is accumulated. The set of fleet vehicles that need recharging located at node r ∈ R is denoted by V_r, ∀r ∈ R, and the set of traffic vehicles that need recharging located at traffic node f ∈ F is denoted by V_f. Let also V_R = ∪_{v∈R} V_v and V_F = ∪_{v∈F} V_v. Table 3 presents a nomenclature of all the symbols used in this section.
The total charging demand generated at each candidate node j ∈ C is equal to the total number of vehicles assigned to it i.e., ∑ i∈V R ∪V F x ij , where x ij is a binary variable indicating if vehicle i is assigned for recharging to CS located at j. We assume that traffic vehicles can be assigned only to on-street CSs, whereas fleet vehicles can be assigned to any CS.
The common goal of a TNC and the local authorities is the opening of CSs at those locations that will induce the minimum cost. TNCs are responsible for the installation of their privately owned CSs, placed either on-street or off-street, and are therefore responsible to cover all the induced costs. These include the land cost associated with the area j and the infrastructure cost β for the installation of each charger. Furthermore, if public CSs are used by TNC vehicles an annual park & charge cost p is incurred for every used charger. Therefore the total monetary cost that needs to be covered by the TNC is where y j is a binary variable indicating whether a CS is placed at location j, ω j is a binary variable indicating whether the CS at j is placed on-street (0) or off-street (1). The empty vehicle miles travelled to recharge by the TNC fleet constitute an indirect factor of the incurred cost. Specifically, the total time needed by the TNC vehicles to reach the nearest CS, once they need recharging is where t ij indicates the travel time to the nearest CS j ∈ C ∪Ĉ from the location of vehicle i ∈ V R ∪ V F . Summing up all the above costs, the objective function that the TNC aims to minimise is qT + M, where q is a conversion parameter from time to monetary units.
In a similar way to [34,35], we consider CSs that act as M/M/m systems. Each CS serves a set of charging demand nodes, with an arrival rate of λ_j = ∑_{i∈V_R∪V_F} x_ij at node j. Assuming an exponentially distributed service rate with an average value µ_j at CS j, the average waiting time W at the CSs can be computed from standard M/M/m queueing formulas, and the objective function, if we consider the waiting time as an additional cost, becomes qT + M + W. Due to the non-linearity of this function, we follow the approach introduced in [30] to convert the problem to an equivalent ILP. The authors showed that the waiting time of an M/M/m queuing system can be approximated by adding a set of probabilistic constraints stating that each CS can have at most b vehicles in line with a probability of at least α when a vehicle arrives for recharging. Marianov et al. [29] showed that these probabilistic constraints can equivalently be written as linear constraints in terms of ρ_αj, the value of λ_j/µ_j for which the constraint holds with equality, assuming that m chargers are placed at CS j.
Table 3 (nomenclature, partial). Parameters: m_j: number of chargers at existing node j; b: maximum number of vehicles in a queue; α: minimum value of the probability of b vehicles in a queue; λ_j: daily arrival rate of the CS at node j; µ_j: daily service rate of the CS at node j. Variables: x_ij: assignment of vehicle i to the CS at node j; y_j: indicator of a CS at node j; ω_j: indicator of the type of CS, off-street or on-street; z_jm: indicator of at least m chargers being placed at node j; ψ_j: number of chargers of the CS at node j (integer).
The value of ρ_αj can be decreased by manipulating the x_ij (by setting many of them equal to zero). Marianov et al. [30] later relaxed this assumption to consider a variable number of chargers per CS. To do so, an additional set of binary variables z_jr is introduced, indicating whether at least r chargers are placed at CS j: z_jr has a value of 1 if at least r chargers are placed at j and 0 otherwise. To ensure that z_jr = 1 only if z_{j,r−1} = 1, the following set of ordering constraints must hold: z_jr ≤ z_{j,r−1}, ∀j ∈ C, r = 2, . . . , K, where K denotes the maximum number of chargers that can be placed at a CS. Using the new variables, constraints (5) are rewritten for every candidate CS j ∈ C and every existing CS j ∈ Ĉ respectively, where ρ_αj is now the value of ρ for which (6) holds with equality when m_j chargers are placed at CS j. The value of ρ_αj can be computed for every CS j before solving the ILP using numeric root-finding techniques.
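A sketch of such a root-finding computation is given below. It assumes the standard M/M/m steady-state formulas for the probability of having at most b vehicles in the queue; the specific parameter values are illustrative, not taken from the paper.

```python
# Sketch (assumed implementation, following the construction described above):
# compute rho_alpha, the largest offered load lambda/mu for which an M/M/m station
# keeps at most b vehicles in the queue with probability >= alpha.
import math
from scipy.optimize import brentq

def prob_queue_at_most_b(a: float, m: int, b: int) -> float:
    """P(queue length <= b) for an M/M/m queue with offered load a = lambda/mu."""
    rho = a / m
    if rho >= 1.0:
        return 0.0  # unstable queue
    p0 = 1.0 / (sum(a**n / math.factorial(n) for n in range(m))
                + a**m / (math.factorial(m) * (1.0 - rho)))
    # P(queue > b) = p0 * a^m/m! * rho^(b+1) / (1 - rho)
    tail = p0 * a**m / math.factorial(m) * rho**(b + 1) / (1.0 - rho)
    return 1.0 - tail

def rho_alpha(m: int, b: int, alpha: float) -> float:
    """Largest offered load satisfying the probabilistic queue-length constraint."""
    f = lambda a: prob_queue_at_most_b(a, m, b) - alpha
    return brentq(f, 1e-9, m - 1e-9)

# Example: a station with 4 chargers, at most 2 vehicles waiting with probability 0.95
print(rho_alpha(m=4, b=2, alpha=0.95))
```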
Due to the restricted number of parking spaces, we have further assumed that the area of interest is initially divided into zones and that the number of on-street CSs in each zone is bounded by a specific number N z for zone z, that is provided by the local authority. Taking into account the objective function and the constraints induced by TNC and local authorities, we formulate the Charging Facility Location (CFL) problem as an Integer Quadratic Program (IQP): where [K] = 1, . . . , K. The objective function (10) minimises the total driving time to CSs and the opening or the park & charge costs associated with CSs. Constraints (11) and (12) ensure that each traffic vehicle and each fleet vehicle respectively, that need to be recharged are each assigned to one CS. Constraint (13) forces the opening of a CS at node j, if a vehicle has been assigned to it. Constraints (14) and (15) force a candidate or an existing CS respectively at node j to have at least as many chargers as the number of vehicles assigned to it. Constraint (16) ensures the ordering of the binary variables z i.e., if CS at node j has at least r chargers installed (indicated by z jr = 1) then it must have first at least m − 1 chargers installed (indicated by z j(r−1) = 1). Constraints (17) and (18) bound the queue length of the candidate and existing CS at node j respectively. Constraint (19) upper bounds the number of on-street CSs at zone n. Constraints (20) force variables x ij , y j , ω j , z jr to take binary values while constraint (21) forces variable ψ j to take integer values. We note that before solving the optimisation problem, variables y j are set to 1 and ψ j to the number of available chargers for every existing CS j ∈Ĉ. The values of variables z jr are then set according to the value of the corresponding ψ j for every existing CS j ∈Ĉ, r ∈ [K].
NP-Hardness
We next consider a special case of the CFL problem with the objective consisting only of the time factor T for TNC vehicles and candidate locations, and a subset of the constraints, namely (12), (13) and (19). If we further omit the ω_j variables, ∀j ∈ C, and relax the equalities (12) to inequalities, we derive the following problem: ∑_{j∈C∩n} y_j ≤ N_n, ∀n ∈ H (25), which is the k-median problem, known to be NP-hard [36]. As a result, the CFL problem itself is NP-hard.
Linearisation
The above IQP can be converted into an ILP [37] by introducing new variables and a set of additional constraints. The product y_j ω_j, ∀j ∈ C, can be linearised by introducing a variable w_j ∈ {0, 1} and the constraints w_j ≤ y_j, w_j ≤ ω_j and w_j ≥ y_j + ω_j − 1. Similarly, we let u_ij = x_ij ω_j, ∀j ∈ C, i ∈ F, and introduce the constraints u_ij ≤ x_ij, u_ij ≤ ω_j and u_ij ≥ x_ij + ω_j − 1. We also note here that ψ_j can equivalently be written as ∑_{m=1}^{K} z_jm, ∀j ∈ C ∪ Ĉ. The ILP formulation after linearisation then follows.
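As a toy illustration of this linearisation trick (not the paper's model), the following sketch builds the three constraints for a single binary product in PuLP and checks that the auxiliary variable behaves as the product:

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, PULP_CBC_CMD

prob = LpProblem("binary_product_linearisation", LpMinimize)
y = LpVariable("y", cat=LpBinary)          # e.g. a CS is opened at node j
omega = LpVariable("omega", cat=LpBinary)  # e.g. the CS at j is off-street
w = LpVariable("w", cat=LpBinary)          # auxiliary variable representing y * omega

prob += w                    # minimise w, so the constraints must force it up
prob += w <= y               # w can be 1 only if y is 1
prob += w <= omega           # w can be 1 only if omega is 1
prob += w >= y + omega - 1   # w must be 1 if both y and omega are 1
prob += y == 1               # fix both factors to 1 for the check
prob += omega == 1

prob.solve(PULP_CBC_CMD(msg=False))
print(int(w.value()))        # prints 1, i.e. w = y * omega
```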
Genetic Algorithm
Since the CFL problem is NP-hard, solving the ILP (33)-(38) exactly becomes computationally prohibitive even for small instances. For this reason we solve the problem approximately using a Genetic Algorithm (GA). A GA is a metaheuristic method consisting of five components: generation, evaluation, selection, crossover and mutation. The algorithm begins by generating a number of feasible solutions (the population) for the problem. Each solution of the GA (a chromosome) is represented by an array containing the values of the variables. Every solution is evaluated with regard to the value of its objective function.
The GA proceeds in a number of iterations (generations) to derive a solution of approximately optimal cost. Once the initial solutions have been generated, a number of the solutions are selected to be passed onto the next phases, during the selection phase. The phases that follow, crossover and mutation modify the given solutions. Specifically, crossover combines two solutions by a given probability p c while mutation alters the values of some of the variables according to a given probability p m . The purpose of randomisation aims at avoiding locally optimal solutions.
The whole process of the GA is presented in Algorithm 1. The values of the variables of each chromosome are carefully set in the generation phase in such a way that no constraint is violated. In the evaluation phase, the value of the objective function is returned if no constraint ((34)-(38)) is violated; otherwise a very large value (representing ∞) is returned. The values of the parameters for the GA can be seen in Table 4.
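A compact, self-contained sketch of such a GA is given below for a toy facility-location instance. The large penalty for infeasible chromosomes mirrors the "very large value" used in the evaluation phase, and the initial population is generated so that chromosomes are likely to be feasible, echoing the careful generation phase described above; the population size, probabilities and cost values are illustrative and are not the parameters of Table 4.

```python
import random

random.seed(0)
N_CANDIDATES, N_DEMAND = 30, 100
MAX_OPEN = 8                      # e.g. a cap on opened CSs; assumed value
OPEN_COST, BIG = 50.0, 1e9        # illustrative opening cost and infeasibility penalty
dist = [[random.uniform(1, 60) for _ in range(N_CANDIDATES)] for _ in range(N_DEMAND)]

def evaluate(chrom):
    """Objective value, or a very large penalty if the chromosome is infeasible."""
    open_js = [j for j, bit in enumerate(chrom) if bit]
    if not open_js or len(open_js) > MAX_OPEN:
        return BIG
    travel = sum(min(dist[i][j] for j in open_js) for i in range(N_DEMAND))
    return travel + OPEN_COST * len(open_js)

def crossover(a, b, p_c=0.8):
    """Single-point crossover applied with probability p_c."""
    if random.random() < p_c:
        cut = random.randrange(1, N_CANDIDATES)
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a[:], b[:]

def mutate(chrom, p_m=0.02):
    """Flip each bit independently with probability p_m."""
    return [bit ^ (random.random() < p_m) for bit in chrom]

# Sparse initialisation keeps most initial chromosomes feasible
pop = [[1 if random.random() < 0.15 else 0 for _ in range(N_CANDIDATES)] for _ in range(60)]
for _ in range(200):                     # generations
    pop.sort(key=evaluate)
    parents = pop[:30]                   # truncation selection
    children = []
    while len(children) < 30:
        c1, c2 = crossover(*random.sample(parents, 2))
        children += [mutate(c1), mutate(c2)]
    pop = parents + children[:30]

print("best cost:", round(evaluate(min(pop, key=evaluate)), 1))
```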
Discussion
An agent-based simulation model was implemented using C# to identify the locations where the vehicles need recharging, continuing the work on assignment and pricing of shared rides [33]. To do so, a total of 62,147 one-day taxi trips retrieved from Chicago Open Data Portal [38] have been imported into the simulation platform. Only trips with distance larger than 500 m between the pick-up and drop-off locations of the customers were considered in the simulation. The Genetic Algorithm was implemented in Python and tested on a workstation with a 16-core Intel Xeon E5-2640 v3 CPU (2.6 GHz) and 128 GB RAM.
The area of Chicago, Illinois, USA was used as a case study, with a network consisting of 28,316 nodes and 75,758 links. The links' average speeds were obtained from the OSMnx library [19]. The nodes that were considered as candidate locations for the placement of charging facilities were obtained by a k-means clustering. The travel times between vehicles and candidate/existing CS locations were computed during pre-processing. The GA was implemented in DEAP [39] and was tested under fleet sizes ranging from 500 to 3500 vehicles. Figure 2 presents the geographical distribution of the locations of existing CSs and of those suggested for installation in the area of Chicago. The numbers in the circles indicate the maximum number of available chargers in the CSs. Blue circles correspond to the locations of existing CSs, while red ones correspond to the locations suggested for the installation of new CSs. For this study we assumed an average vehicle speed of 30 km/h throughout the whole area of Chicago and a cost per km of 0.25 $/h. With regard to the infrastructure, the installation of a DC fast charger costs on average $45,000 [40,41]. Adding a maintenance cost of $1500 per year, the total cost (assuming a period of 10 years) grows to $60,000 per charger. Thus the conversion parameter of time to monetary units q is set to 0.737.
The park and charge cost is set to $9 per half hour using a DC fast charger [42]. Assuming at least 2 charges per day per vehicle, the annual cost for charging in public CSs is estimated to be p = $6570. Building permits were also retrieved from [38], and the land costs were computed assuming that a typical parking space is approximately 12 m² and a charger needs approximately 2 m² of space. The maximum number of chargers that can be installed per CS has been assumed to be 16. That is because the cost of opening many CSs with one charger each is much higher than the cost of opening one CS with many chargers, thereby achieving an economy of scale. We note, however, that the number of 16 chargers per CS is not strict. We did not aim to optimise this number, as this is out of the scope of this work.
The nominal voltage of the batteries' cells of the vehicle is assumed to be 360V (e.g., Nissan Leaf), the nominal capacity 40 kWh with electrolyte material of lithium-ion. The cathode material is a layer structure consisting of Lithium Nickel Cobalt Manganese Oxide, while the anode is made of graphite coated on a copper foil [43].
In Figure 3 the number of CSs that need to be opened to cover the charging demand is presented as a function of the fleet size. As can be seen from the diagram, the number of CSs that need to open rises linearly up to a fleet size of 1000 vehicles and then continues rising at a much smaller rate. This is because the travel time cost becomes considerable when the number of CSs is small. On the contrary, the number of CSs seems to be adequate for a larger fleet, since each CS consists of up to 16 chargers (in our study). In Figure 4 the change of the various costs is presented as a function of the fleet size. As can be observed, the total cost follows a similar trend to that of Figure 3. As explained in Section 3, the total cost is the sum of four components, namely the infrastructure cost, the land cost, the park & charge cost and the travel time cost, where the park & charge cost is considered only for the existing CSs.
It is evident from the diagram that the land cost is negligible. It should be noted that, since each CS consists of up to 16 chargers, the infrastructure cost of one CS with many chargers is lower than that of many CSs with a single charger each, resulting in an economy of scale. As a result, the infrastructure cost is very small even when the fleet sizes are large. Furthermore, as the fleet size grows, the charging demand also grows, resulting in an increase of the park & charge cost. This happens because there are not enough CSs opened in nearby areas, and therefore the travel time cost remains considerable. On the other hand, if we consider the diagram of Figure 3, it can be observed that at the same time the number of CSs also grows. However, after a fleet size of 1000 vehicles the rate of increase slows down, because there are then enough CSs in nearby areas. In Figures 5 and 6, the variation of the total cost and of the number of CSs that need to be opened is studied under a fixed fleet size of 1000 TNC vehicles, with respect to the total traffic of EVs on the streets. As can be observed, the number of CSs that need to be opened increases when the traffic rate exceeds 0.3, and their number needs to grow by approximately 30%. This happens because the number of CSs is adequate for traffic rates up to 0.3, while for larger traffic rates the additional needs are satisfied by the installation of new CSs. Furthermore, the total cost (Figure 6) seems to follow the same behaviour. This is due to the fact that TNCs try to adapt to the increasing demand by installing new CSs. As a result, the expenses are mostly related to infrastructure and land costs.
Conclusions
In this paper we presented an empirical study of the placement of charging stations dedicated to TNC EVs. We proposed a mathematical formulation for this facility location problem and solved it with a Genetic Algorithm. To the best of our knowledge, this is the first paper to explicitly distinguish between on-street and off-street facilities, taking into account the restrictions that are set by local authorities with respect to the maximum number of on-street facilities per region.
Experimental results indicate that approximately 20 million dollars will need to be invested for the construction of the 140 CSs that will be needed to cover the charging demand of a TNC fleet of 1000 vehicles in the city of Chicago. The main source of expenses arises from the park & charge cost induced by the charging of the TNC vehicles at existing public CSs. Considering that the number of EVs has been increasing significantly over the last years, even more CSs will have to be constructed to cover the charging demand. Furthermore, if the fleet consists of more than 1000 vehicles, the number of CSs does not need to grow proportionally with the fleet size, since the charging points are then adequate to cover the demand. As a result, a significant amount of the investment funds can be saved. As a practical recommendation, we suggest that each CS consist of many chargers in order to achieve an economy of scale.
Our suggestions for future work include an optimisation study on the number of chargers per CS and the study of learning methods for the calculation of the charging demand. Other solution approaches for the optimisation problem would include the combination of the Lagrangian relaxation method and a greedy heuristic, which can result in a better approximation of the objective. | 2020-12-31T09:08:23.218Z | 2020-12-26T00:00:00.000 | {
"year": 2020,
"sha1": "01fc106dfe9609353ed05583375bbc34a1603376",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/1/168/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "38dc489388a0e2a43d7ef567d3c6df4b72ed5f62",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
196470878 | pes2o/s2orc | v3-fos-license | Monopole-antimonopole pair production by magnetic fields
Quantum electrodynamics predicts that in a strong electric field, electron-positron pairs are produced by the Schwinger process, which can be interpreted as quantum tunnelling through the Coulomb potential barrier. If magnetic monopoles exist, monopole-antimonopole pairs would be similarly produced in strong magnetic fields by the electromagnetic dual of this process. The production rate can be computed using semiclassical techniques without relying on perturbation theory, and therefore it can be done reliably in spite of the monopoles' strong coupling to the electromagnetic field. This article explains this phenomenon and discusses the bounds on monopole masses arising from the strongest magnetic fields in the Universe, which are in neutron stars known as magnetars and in heavy ion collision experiments such as lead-lead collisions carried out in November 2018 in the Large Hadron Collider at CERN. It will also discuss open theoretical questions affecting the calculation.
I. INTRODUCTION
Why do electrically charged particles exist but magnetically charged particles, i.e., magnetic monopoles, apparently do not [1,2]? If they did, Maxwell's equations would have a perfect duality symmetry between electricity and magnetism. In 1931, Dirac showed that (static) magnetic monopoles are compatible with quantum mechanics [3], provided that their magnetic charge g and the electric charges e of all electrically charged particles satisfy the Dirac quantisation condition eg/2π ∈ Z.
This implies that both electric and magnetic charges have to be quantised, and if the elementary electric charge is the charge of the positron, then the elementary magnetic charge is the Dirac charge g_D = 2π/e. Because g_D ≫ 1, quantum field theory of magnetic monopoles cannot be studied using standard perturbation theory techniques. One consequence of this failure of perturbation theory is that it is not possible to compute the production cross section of magnetic monopoles in particle collisions. Collider searches of magnetic monopoles can therefore only report upper bounds on the cross section, rather than actual constraints on the theory parameters such as the monopole mass. In practice, it is customary for the experiments to quote mass bounds obtained by assuming the tree-level Drell-Yan cross section, but these bounds cannot be taken literally and serve mainly as a way of comparing the performance of different experiments. Furthermore, for solitonic monopoles such as the 't Hooft-Polyakov monopole [4,5], there are semiclassical arguments that their production cross section in elementary particle collisions would be exponentially suppressed [6,7]. The argument does not apply, at least in the same form, to elementary magnetic monopoles, but it still raises the question of whether their production cross section may also be very different from the Drell-Yan estimates.
In this paper, I will discuss a different monopole production process, namely Schwinger pair production in strong magnetic fields [8][9][10]. Its rate can be computed using semiclassical instanton techniques, without having to assume a weak coupling. Therefore it does not suffer from the same strong-coupling issue as the perturbative calculations. It is also largely independent of the microscopic details of the monopoles, and within limits, the same results will therefore apply to both elementary and solitonic monopoles. This means that it is possible to obtain actual, largely model-indepenent lower bounds on the magnetic monopole mass.
The strongest known magnetic field in the Universe are in certain neutron stars called magnetars and in heavy ion collision experiments. I discuss the Schwinger process in both cases, and review the monopole mass bounds obtained from them.
II. ELEMENTARY AND SOLITONIC MONOPOLES
In principle, magnetic monopoles can appear in quantum field theory as either elementary or composite particles. In the former case, there is a separate quantum field associated with the monopole, whereas in the latter case, they are states made up of other fields. At weak coupling, this state is usually a semiclassical soliton solution, and therefore I will refer to these as solitonic monopoles.
The best known example of a solitonic monopole is the 't Hooft-Polyakov monopole solution [4,5] in the Georgi-Glashow model [11], an SU(2) gauge theory with an adjoint scalar field Φ. When the scalar field has a non-zero vacuum expectation value, it breaks the gauge symmetry to U(1), and therefore the low energy effective theory corresponds to electrodynamics. In the classical theory, the 't Hooft-Polyakov monopole is a smooth solution of the field equations, and it has the magnetic charge g = 4π/e = 2g_D, where e is the SU(2) gauge coupling and also corresponds to the elementary electric charge. The monopole mass is given by the energy of the solution and is approximately M ≈ 4πm/e², where m is the mass of the massive gauge bosons. The monopole also has a finite size, R ∼ 1/m. The same monopole exists also in the quantum theory as a non-perturbative state, and at weak coupling it is well approximated by the classical solution. Quantum corrections to it can be calculated perturbatively [12], because the coupling constant of the theory is e ≪ 1, not g = 4π/e ≫ 1. One can also go beyond perturbation theory by using lattice Monte Carlo methods [13,14].
The 't Hooft-Polyakov solution can be found in any Grand Unified Theory (GUT) [15], and therefore the existence of magnetic monopoles is an unavoidable consequence of grand unification. The mass of these GUT monopoles (4) would typically be around 10 16 GeV, which is well above the energies of any foreseeable particle collider experiments. However, there are theories that have lighter solitonic monopole solutions [16,17], possibly even within the reach of the Large Hadron Collider.
Although most theoretical work has focused on solitonic monopoles, it is also important to consider the case of elementary monopoles. In practice, theoretical calculations with them are difficult, not only because of their strong charge, which makes perturbation theory invalid, but also because the existing quantum theory formulations are cumbersome [18][19][20]. However, that is not an argument against their existence. If the magnetic monopole is an elementary particle, its mass is a free parameter. It is therefore perfectly possible that it is at the TeV scale or even lower, as long as it is compatible with the current experimental bounds.
Because of quantum effects, even elementary monopoles would have to have a non-zero effective size, r ∼ g²/(4πM) [21,22], where M is the monopole mass. This is significantly larger than their Compton radius, and therefore even elementary monopoles would not actually appear fully pointlike. However, because of the calculational difficulties due to the strong magnetic charge, there is no detailed understanding of how this finite size arises. It is, nevertheless, interesting to note that it agrees with the size (5) of the 't Hooft-Polyakov monopole. Indeed, the distinction between elementary and solitonic monopoles is not necessarily clear-cut, and there are examples of theories which have two dual descriptions, in one of which the monopoles are solitonic and in the other one elementary [23].
III. PRODUCTION CROSS SECTION
There have been several searches for magnetic monopoles in particle colliders [24], including LEP, Tevatron and the LHC. The more recent results are from the ATLAS and MoEDAL experiments at the LHC [25,26]. Because there has been no positive discovery, these experiments place upper bounds on the monopole production cross section. In order to constrain actual theory parameters such as the monopole mass, one would need a reliable theoretical prediction for the production cross section. Sadly such a prediction does not exist for monopole production in collisions of elementary particles.
The main obstacle to the calculation of the production cross section of magnetic monopoles is their strong magnetic charge g = ng D , n ∈ Z, which the Dirac quantisation condition (1) requires to be much greater than one. This means that the calculation cannot be carried out using perturbation theory. There are also strong arguments that the production cross section of solitonic monopoles, such as GUT monopoles, is suppressed by an exponential factor exp(−4/α) ∼ 10 −238 [6,7]. This is because they are highly ordered coherent lumps of field consisting of O(1/α) quanta. Even if there is enough energy available to produce the monopoles, it is much more likely that the same energy gets distributed to a large number of particles in a less ordered fashion. It is not known if the production cross section of elementary monopoles is suppressed by a similar factor, because we currently do not have the tools to carry out the calculation.
Because of this theoretical uncertainty, experiments tend to quote nominal mass bounds based on the tree-level Drell-Yan cross section. For monopoles with a single Dirac charge, g = g_D, this gives [25] M ≳ 1850 GeV or M ≳ 2370 GeV, depending on whether they have spin 0 or 1/2, respectively. However, these nominal bounds are mainly useful for comparing different experiments and should not be interpreted as actual lower bounds on monopole masses. Much lighter monopoles can exist if their production cross section is low. In order to obtain an actual mass bound, one therefore has to consider other processes than monopole pair production from elementary particle collisions.
IV. SCHWINGER PAIR PRODUCTION
In addition to elementary particle collisions, magnetic monopoles can also be produced in a strong external magnetic field by the Schwinger process.
It was shown by Sauter [27] and Schwinger [28] that electrically charged particles are pair produced in a sufficiently strong electric field. This can be understood as tunneling through the Coulomb potential barrier. The rate of this process can be calculated using semiclassical instanton techniques even at strong coupling [8,9].
If the state |Ω⟩ is unstable, then its decay probability can be expressed in terms of the S-matrix Ŝ, the time evolution operator from past infinity to future infinity in Minkowski space, and Ŝ_E, the corresponding operator in Euclidean space. In the semiclassical approximation, the decay rate is proportional to e^{−S_inst}, where S_inst is the action of the instanton solution, which is a classical solution of the Euclidean equations of motion with one negative mode, i.e., a saddle point solution.
One negative mode is needed so that the solution gives an imaginary contribution to the path integral.
For an electrically charged point particle with mass m and electric charge e in a background gauge field A^ext_µ, the Euclidean action is a worldline integral over a parameter τ ∈ [0, 1), consisting of a mass (length) term, a coupling to the background field A^ext_µ, and a last term corresponding to the self-interactions of the particle.
In a constant background electric field E, rotation symmetry implies that the solution is a circle in the plane defined by the field E and the time direction. Denoting the radius of the circle by r, its action is S(r) = 2πmr − πeEr² − e²/4 (10), where the unstable direction corresponds to a change of r. Therefore the saddle point corresponds to the radius r = m/(eE) (11), which maximises the action and gives S_inst = πm²/(eE) − e²/4 (12). The solution has zero modes corresponding to translations in the four Euclidean directions, which contribute a spacetime volume factor, which means that there is a non-zero, finite rate per unit spacetime volume [9], Γ = D exp(−πm²/(eE) + e²/4) (13), where the prefactor D is given by a functional determinant of the second derivatives of the action. The result (13) means that if the field is sufficiently strong, E ≳ πm_e²/e (14), where m_e is the electron mass, electron-positron pairs are produced at an unsuppressed rate. This field is a few orders of magnitude stronger than what can be currently reached with the most powerful lasers, and therefore Schwinger pair production has not yet been confirmed directly in experiments [29,30].
By the electromagnetic duality, if magnetic monopoles exist, they would be pair produced by the same mechanism in a sufficiently strong external magnetic field. The rate can be obtained from Eq. (13) by replacing e → g and E → B [8,9], Γ ∝ exp(−πM²/(gB) + g²/4) (15). Because g ≫ 1, the second term in the exponent is important, and therefore the field strength needed to produce monopoles of mass M at an unsuppressed rate is B ≳ 4πM²/g³ (16). Correspondingly, if monopole production is not observed in a field B, it implies a lower mass bound M ≳ (g³B/4π)^{1/2} (17). This calculation assumes that the monopoles are pointlike. The radius of the instanton is given by the electromagnetic dual of Eq. (11), r_inst = M/(gB) (18), so this assumption is justified if the monopole size (6) is less than this, R_QM ≪ r_inst. It is easy to check that this is true if the monopole mass satisfies Eq. (17). As a simple application, one can consider Schwinger pair production of monopoles by the LHC magnets, which have field strength |B| ≈ 8.3 T ≈ 1.6×10⁻¹⁵ GeV². Even before any particle collisions were carried out, the fact that this field did not produce magnetic monopoles when the magnets were first switched on implies a lower bound on the mass of monopoles.
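As a quick numerical illustration (using the exponent and the bound formula in the form written above, and the standard Dirac charge g_D = 2π/e ≈ 20.7), one can evaluate the implied mass scale for the LHC magnet field and, for comparison, for the magnetar field discussed in the next section:

```python
import math

alpha = 1.0 / 137.036
e = math.sqrt(4 * math.pi * alpha)   # elementary electric charge, ~0.303
g_D = 2 * math.pi / e                # Dirac charge, ~20.7

def mass_bound_GeV(B_GeV2: float, g: float = g_D) -> float:
    """Monopole mass at which the exponent pi*M^2/(g*B) - g^2/4 vanishes."""
    return math.sqrt(g**3 * B_GeV2 / (4 * math.pi))

B_lhc_magnet = 1.6e-15   # GeV^2, ~8.3 T (LHC dipole magnets)
B_magnetar = 4e-5        # GeV^2, ~2e11 T (SGR 1806-20)

print(f"LHC magnets: M >~ {mass_bound_GeV(B_lhc_magnet) * 1e6:.1f} keV")
print(f"Magnetar:    M >~ {mass_bound_GeV(B_magnetar):.2f} GeV")  # ~0.17 GeV, cf. the magnetar bound below
```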
V. NEUTRON STARS
To improve on this bound, one needs to find stronger magnetic fields. The strongest known magnetic fields currently existing in the Universe are in neutron stars known as magnetars, 23 of which have been found [31]. The one with the strongest field is SGR 1806-20, with |B| ≈ 2 × 10¹¹ T ≈ 4 × 10⁻⁵ GeV². Its temperature is low in comparison, T ≈ 0.55 keV, and therefore the Schwinger pair production rate is given by the zero-temperature expression (15).
At the surface of SGR 1806-20, the ratio of the gravitational and magnetic forces acting on a monopole is F_grav/F_mag = G M_NS M/(gB R_NS²) (20), where G is Newton's constant, R_NS ∼ 10 km ≈ 5 × 10¹⁹ GeV⁻¹ is the radius of the magnetar and M_NS ∼ 1.5 M_⊙ ≈ 1.6 × 10⁵⁷ GeV is its mass. If a pair of magnetic monopoles with mass M ≪ 1.8 × 10¹⁷ GeV is produced near the surface of a magnetar, the magnetic field would therefore pull one of them to the surface of the star and expel the other one into space. This would reduce the strength of the magnetic field, in contradiction with observations [10]. Using Eq. (17), one therefore obtains a bound (21), M ≳ 0.17 GeV for g = g_D, with the bound scaling as (g/g_D)^{3/2}.
A more detailed calculation, which takes into account the long time scale over which the field has to survive and grow, gives a somewhat stronger bound, M ≳ 0.31 GeV for g = g_D [10].
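The numbers entering the force-ratio argument can be checked with a few lines (illustrative only; the constants are the rounded values quoted above):

```python
G_N = 6.71e-39   # Newton's constant in GeV^-2 (natural units)
M_NS = 1.6e57    # magnetar mass in GeV (~1.5 solar masses)
R_NS = 5e19      # magnetar radius in GeV^-1 (~10 km)
B = 4e-5         # surface field of SGR 1806-20 in GeV^2
g_D = 20.7       # Dirac magnetic charge

# Monopole mass at which gravity at the surface balances the magnetic force g*B;
# well below this mass the magnetic force dominates and the pair is separated.
M_equal = g_D * B * R_NS**2 / (G_N * M_NS)
print(f"F_grav = F_mag at M ~ {M_equal:.1e} GeV")  # ~2e17 GeV, cf. the 1.8e17 GeV quoted above
```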
The bound (21) was obtained by considering the magnetic field outside the magnetar, but the magnetic fields in the interior are even stronger. There are also many other open questions about magnetars, and with a better understanding of them, one may well be able to improve the bound further.
VI. SPS HEAVY ION COLLISIONS
Even stronger magnetic fields than those around magnetars are present in relativistic heavy ion collisions. The heavy ions are nuclei of heavy elements such as Au or Pb, and therefore they have a high electric charge Q = Ze ∼ 100e. In these experiments, these nuclei are collided at relativistic speeds. When the collision is not head-on, one therefore has two very high electric currents moving in opposite directions past each other at the time of the collision. This induces a very strong magnetic field for a short period of time.
The most recent published monopole search in heavy ion collisions was carried out at the CERN Super Proton Synchrotron (SPS) in 1997 [32]. It was a fixed-target Pb collision experiment with a beam energy of 160A GeV, corresponding to a centre-of-mass energy √s_NN ≈ 17 GeV per nucleon. This produces a magnetic field |B| ≈ 0.01 GeV² [33] and a fireball with a high temperature T ≈ 0.185 GeV [34].
The Schwinger process at non-zero temperature was studied in Refs. [35,36]. To calculate the rate in the semiclassical approximation, one needs to find the instanton in Euclidean space with a compact imaginary time direction of length β = 1/T. This breaks the Euclidean rotation symmetry and therefore the instanton is no longer a circle. Because of the strong coupling, the solutions have to be found numerically, and this was done in Ref. [35]. The result for the SPS case is, however, simple. Because the temperature is sufficiently high, the relevant instanton is a time-independent sphaleron solution, which corresponds to a static monopole-antimonopole pair.
The energy of a static monopole-antimonopole pair separated by a distance r in a constant magnetic field B can be written down explicitly, and the sphaleron configuration corresponds to the separation which maximises this energy, which gives the sphaleron energy E_sph. Including the prefactor, the pair production rate Γ was computed in Ref. [36], and the predicted monopole pair production cross section is σ_MM ≈ σ_tot Γ V, where σ_tot ≈ 6.3 b is the total inelastic cross section and V is the spacetime volume of the collision. The failure of SPS heavy ion collisions to produce magnetic monopoles implies an upper bound σ_MM ≲ 1.9 nb on the monopole pair production cross section [32]. This translates to a bound on the monopole mass [10].
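For orientation, a sketch of the sphaleron energy, assuming the pair energy takes the standard form of a magnetic Coulomb attraction plus the linear potential from the external field (this form is implied, though not written out explicitly, in the text above):

```latex
% Sketch, assuming E(r) is the Coulomb-plus-linear pair energy described above.
\begin{align}
  E(r) &= 2M - \frac{g^2}{4\pi r} - gBr, \\
  \frac{dE}{dr} &= \frac{g^2}{4\pi r^2} - gB = 0
  \quad\Rightarrow\quad r_{\rm sph} = \sqrt{\frac{g}{4\pi B}}, \\
  E_{\rm sph} &= E(r_{\rm sph}) = 2M - \sqrt{\frac{g^3 B}{\pi}} .
\end{align}
```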
VII. LHC HEAVY ION COLLISIONS
The magnetic field produced by a heavy ion collision increases with the collision energy, and therefore the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) should be able to provide much stronger bounds on monopole pair production and, therefore, on the monopole mass. There is no published data on monopole searches at RHIC, so I will focus on the LHC, which carried out a month-long heavy ion run in November 2018, with collision energy per nucleon pair √s_NN = 5.02 TeV. The evolution of the electromagnetic fields during the collision was studied in Ref. [37]. The field strength grows linearly with collision energy, and is highest for collisions with impact parameter b ≈ 13 fm, which corresponds to the diameter of the nucleus. Therefore we are mainly interested in peripheral collisions, and can ignore thermal effects.
For the LHC energy, the peak magnetic field strength is B ≈ 7.3 GeV², reached at the time of collision. The field is highly time-dependent, and can be approximately described by the analytic fit (28) of Ref. [38], where ŷ is the direction of the impact parameter and ω ≈ 73 GeV parameterises the rate of the field evolution in time. The time-dependence is important if the ratio of ω to the inverse radius (18) of the constant-field instanton is large. Therefore we see that the time dependence cannot be ignored for monopoles heavier than a few GeV. In Ref. [38], the pair production rate was computed for the time-dependent field (28) by analytically continuing it to Euclidean time τ, and then finding the instanton solution in this Euclidean background field. In the zeroth-order approximation, in which the monopole worldline self-interactions are ignored completely, the instanton solution can be found analytically [38][39][40] to be an ellipse with semi-major axis a_τ = (M/gB)√(1+ξ²) in the Euclidean time direction and semi-minor axis a_y = M/(gB(1+ξ²)) in the y direction. Its action (31) is given in Ref. [38] in terms of the elliptic integrals E and K, and the expression simplifies at ξ ≫ 1, which corresponds to M ≫ 1 GeV. At the zeroth order, the prefactor in the rate Γ can also be computed analytically [39,40]. Importantly, the result (31) shows that the rate decreases when ξ increases. One can also go beyond the zeroth-order approximation analytically, and compute the next-to-leading order self-interaction correction (32) to the instanton action [38]. Comparing Eqs. (31) and (32), we see that the total next-to-leading order action becomes negative under a condition which is always satisfied for LHC collisions, irrespective of the monopole mass M. At face value this would imply that the pair production is unsuppressed, but in practice it is more likely to be a signal that the approximations have broken down.
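A rough numerical illustration (not from Ref. [38]): here ξ is assumed to be the ratio described above, ξ = ω/(gB/M) = Mω/(gB), and the peak-field values quoted in the text are used.

```python
# Rough numerical check, assuming xi = M*omega/(g*B) as described in the text above.
import math

alpha = 1 / 137.036
g = 2 * math.pi / math.sqrt(4 * math.pi * alpha)   # Dirac charge g_D, ~20.7
B = 7.3          # peak field in GeV^2
omega = 73.0     # field evolution rate in GeV

for M in (1.0, 3.0, 10.0, 70.0):                   # monopole masses in GeV
    xi = M * omega / (g * B)
    print(f"M = {M:5.1f} GeV  ->  xi = {xi:5.1f}")
# xi exceeds 1 already for masses of a few GeV, so the time dependence matters.
```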
It is possible to find the instanton to all orders in self-interactions by solving the full non-linear equations numerically [38]. Because the equations are non-local, this is a computationally heavy task, and therefore it has currently only been done for relatively small ξ, where the NLO approximation should be valid; indeed, the results agree with it.
Again, this calculation is based on the assumption of a pointlike monopole. For it to be valid, the monopole size (6) needs to be smaller than the semi-minor axis a_y of the instanton, which gives the condition ξ² ≲ 8πM²/(g³B). This coincides approximately with the value at which the NLO action in the pointlike approximation becomes negative, and it is not satisfied for any monopole mass in LHC heavy ion collisions. For an accurate and reliable estimate of the production rate, it will therefore be necessary to include the non-zero monopole size. For 't Hooft-Polyakov monopoles this can be done, at least in principle, by finding the instanton solution in the full field theory.
As a rough estimate of the production cross section, one can use the locally constant field approximation, in which the constant-field rate (15) is integrated over the spacetime volume where the fields are strongest. Within this approximation, if the LHC searches do not find monopoles, Eq. (17) would give a very rough lower bound (33) of around 70 GeV. Because the effect of the time-dependence appears to enhance the production, this can be expected to be a conservative estimate. A complete calculation may therefore make the bound significantly stronger.
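As an order-of-magnitude illustration (not the calculation of Ref. [10]), one can assume that the constant-field bound takes the form M ≲ √(g³B/4π), i.e. the mass at which the sphaleron energy of Section VI vanishes; this simple form reproduces both the magnetar value and the rough LHC estimate quoted in the text.

```python
# Order-of-magnitude sketch, assuming the bound takes the form M ~ sqrt(g^3*B/(4*pi)).
# This is an assumption for illustration, not the paper's Eq. (17).
import math

alpha = 1 / 137.036
g = 2 * math.pi / math.sqrt(4 * math.pi * alpha)   # Dirac charge g_D

def mass_scale(B):
    """Monopole mass scale (GeV) for a field strength B given in GeV^2."""
    return math.sqrt(g**3 * B / (4 * math.pi))

print(f"magnetar, B = 4e-5 GeV^2 : {mass_scale(4e-5):.2f} GeV")  # ~0.17 GeV, as quoted above
print(f"LHC ions, B = 7.3  GeV^2 : {mass_scale(7.3):.0f} GeV")   # ~70 GeV, the rough LHC estimate
```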
VIII. CONCLUSION
The electromagnetic dual of Schwinger pair production provides a new way of searching for magnetic monopoles. If monopoles exist, they would be produced in a sufficiently strong magnetic field. Conversely, if this does not happen, monopoles either do not exist or their mass is too high. By considering physical instances of strong magnetic fields, we can therefore derive lower bounds on magnetic monopole masses. This requires a theoretical calculation of the pair production rate, which can be carried out using the semiclassical instanton method, which does not require perturbation theory or the assumption of a weak coupling. Therefore it can be applied to magnetic monopoles, whose coupling to the electromagnetic field is necessarily strong.
In a constant field at zero temperature, the rate can be computed analytically and the result is independent of the microscopic details of the monopoles and of whether they are elementary or solitonic. The resulting mass bounds are therefore universal. In a time-dependent field, the full result requires a numerical calculation, and the finite monopole size needs to be taken into account. Therefore the precise result will also depend on the microscopic nature of the monopoles. For solitonic 't Hooft-Polyakov monopoles, the numerical calculation is straightforward in principle, although computationally very demanding.
Using this approach, the magnetic fields of magnetars imply a bound (21) of slightly below one GeV, and heavy ion collisions at the SPS an order of magnitude stronger (27). The LHC should be able to improve significantly on these. The results of the one-month heavy-ion run in November 2018 have not yet been published, but the estimate (33) suggests that if monopoles with mass M ≲ 70 GeV exist, they would have been produced then. If the data show no monopoles, it will therefore imply a lower mass bound of the same order. However, obtaining the precise mass bound will need further theoretical work in order to take the time dependence of the collision and the finite monopole size into account.
At face value, this appears to be 20-30 times lower than the current bounds from ATLAS and MoEDAL. However, it is important to remember that all existing mass bounds from proton-proton collisions are based on perturbation theory, which is not actually valid for magnetic monopoles because of their strong magnetic charge. Therefore one cannot currently rule out the existence of monopoles with masses of even a few GeV, and the only way to do that is to carry out these calculations and experiments. | 2019-07-15T12:40:04.000Z | 2019-07-12T00:00:00.000 | {
"year": 2019,
"sha1": "40599aedb7adb4845733501387a3ce88780a2603",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6863478",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "40599aedb7adb4845733501387a3ce88780a2603",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
73581672 | pes2o/s2orc | v3-fos-license | Effect of Processing on the Oxalate and Calcium Concentrations of Two Local Dishes
Stems of sweet taro (Colocasia esculenta) grown in Thua Thien Huế Province in Vietnam were used as an ingredient to prepare two local dishes, Cơm Hến and Canh Chua Bạc Hà. This study investigated the effect of simple processing treatments used to prepare these popular dishes on the total, soluble and insoluble oxalate and calcium contents of the taro stems. Raw stems were used to prepare Cơm Hến. Three treatments, removing the skin then washing and slicing, slicing and washing, or slicing and then allowing the stems to wilt overnight, were compared to the whole raw stems with the skin retained. Overall, processing the stems reduced the soluble oxalate contents by a mean of 8% when compared with the original raw stems. The mean total calcium bound in the insoluble oxalate fraction of the three processing treatments was 43.3% ± 2.0%. Canh Chua Bạc Hà was prepared by boiling peeled taro stems. In this experiment the peeled stems were boiled for 10, 15 and 20 min and this resulted in 63.4%, 74.5% and 76.6% reductions in soluble oxalate content, respectively, when compared to the original peeled stems. Boiling for 20 min was the most effective way to reduce both the total and soluble oxalate contents of the stems. Thirty-nine percent of the total calcium in the raw taro stems was bound to the insoluble oxalate fraction and this was reduced to a mean of 17.2% ± 2.6% by the three cooking treatments.
Introduction
Sweet taro (Colocasia esculenta) is a major tropical crop that is widely grown in Vietnam. It is cultivated in wet land, sandy soil, paddy fields and gardens [1]. Many different cultivars grow very well in the coastal, nutrient-poor sandy soils in rural Central Vietnam. It is a popular plant and the tubers, stems (petioles) and leaves have a range of uses. Taro tubers are used as an energy-rich food, while taro stems are processed as a vegetable. Leaves and stems are also used as feed for livestock, especially for pigs, as the crude protein content of leaves ranges from 4.7% to 6.2% wet matter (WM); for the stems it ranges from 0.5% to 0.6% WM [2].
The mineral content of commonly-consumed Vietnamese vegetable plants has not been widely researched; in particular, the calcium content of the leaves and stems of taro has not been determined. This is important, as the calcium intakes of children in Vietnam are known to be more than 50% below recommendations [3] and many recommendations suggest an increased consumption of milk and milk products [4]. An understanding of the calcium content of commonly-consumed vegetables, such as taro stems, could assist with this problem. It is important to determine the total calcium contents of these vegetable foods and to determine whether the calcium in the tissues will be absorbed when consumed, as it is known that compounds, such as oxalates, can interfere with the absorption of some minerals.
Taro stems are the most liked part of Mon Ngot (sweet taro), as they are eaten raw as the main ingredient of Cơm Hến, or cooked to prepare Canh Chua Bạc Hà. Approximately one tonne of taro stems is processed and consumed every day by local people and tourists in the 300 small restaurants in Huế city, as well as being sold by street vendors. Cơm Hến is the most well-known dish in Huế city. Canh Chua Bạc Hà is another well-known dish where cooked taro stems are mixed with fish, tomatoes and pineapples. However, earlier studies have shown that all parts of the taro plant contain high levels of soluble and insoluble oxalates, important anti-nutritive compounds [5]-[10]. Recent studies of four common cultivars of C. esculenta (So Trang, Tron, Tia and Chum) reported mean total and soluble oxalate contents of 635.2 ± 92.4 and 227.9 ± 43.6 mg/100g WM, respectively, in freshly harvested taro leaves [11]. In an earlier study, Hang et al. [12] showed that the petioles of common cultivars of taro grown in central Vietnam contained significantly higher amounts (P < 0.05) of total and soluble oxalates when compared to the levels in the leaves. Hang et al. [10] were able to show that simple processing techniques, such as wilting, soaking and washing in water, were effective in reducing the soluble oxalate contents of locally-grown leaves and stems of one cultivar of taro (Alocasia odora). Cooking stems, and a mixture of stems and leaves, for between 10 and 60 min was an effective way to reduce the soluble oxalate content. The total oxalate in stems ranged from 180-331 mg/100g WM [9]. These studies showed that total oxalates can be reduced by ensiling (36.8%), boiling (48.4%), soaking (23.5% to 69.5%), wilting (5.9% to 14.2%) and washing (9.2%) [9] [10].
The regular consumption of foods containing high concentrations of soluble oxalates is of concern because of the harmful effects they cause when absorbed into the body. High soluble oxalate diets are widely known to cause an excessive urinary excretion of oxalate (hyperoxaluria), which causes an increased risk of developing calcium oxalate-containing kidney stones. About 75% of all kidney stones are composed mainly of calcium oxalate [13]. Therefore, people predisposed to forming kidney stones are recommended to minimise their intake of foods high in oxalates [14]. It has been reported that the greater part of the oxalic acid in plants is present in the form of soluble oxalates [15] in combination with Na⁺, K⁺ or NH₄⁺ [16]. In addition, soluble oxalates in the diet can decrease the bioavailability of various cations, including calcium [17], magnesium and iron, by forming insoluble oxalates.
Since sweet taro grows readily in many different environments in Central Vietnam and is a very popular base for many local dishes, it is important to investigate the most efficient ways to reduce the oxalate content of taro petioles when used raw to make Cơm Hến, or cooked in Canh Chua Bạc Hà, and to determine the effect of oxalates on the available calcium content of the raw and processed stems.
Materials and Methods
Thirty kg of stems of Mon Ngot (sweet taro) were harvested in the early morning in September 2016 from Huong Chu village, in the Huong Tra District of Thua Thien Huế Province, Vietnam. Ten kg each of fresh petioles were used to prepare either Cơm Hến or Canh Chua Bạc Hà.
Preparation of Cơm Hến
The outer skin was removed from the stems, which were then processed using three different methods: washing and then slicing into 30 mm length pieces; thinly slicing and then washing; thinly slicing and allowing the petioles to wilt overnight.
Preparation of Canh Chua Bạc Hà
The stems were cooked using three different cooking times. The taro stems were washed, peeled to remove the outer skin, and then cut into 30 mm lengths.
Three different batches of 500 g of processed taro were then placed in 2 L of boiling tap water and cooked for 10, 15 or 20 min. The cooked samples were then allowed to cool at room temperature, 26˚C ± 1˚C, and then the excess fluid was allowed to drain off for five min.
Sample Preparation
Three representative 300 g samples of material from each of the processing methods were dried in an oven set at 65˚C, ground to a fine powder using a Sunbeam multi grinder (Model no. EMO 400, Sunbeam Corporation Limited, NSW, Australia), then sealed in plastic bags until analysis could commence. The residual moisture was determined in triplicate [18] by drying to a constant weight in a 105˚C oven for 24 h.
Oxalate Determination
The total and soluble oxalate contents of the individual finely ground samples (~0.5 g) were determined using the method outlined by Savage et al. [19]. Each sample was extracted to measure the total oxalate content and three replicates were extracted to measure the soluble oxalate contents. Forty mL of 0.2 M HCl (Aristar, BDH Chemicals, Ltd., Poole, Dorset, UK) were added to the flasks for the total oxalate extraction and 40 mL of high purity water were added for the extraction of soluble oxalates. All flasks were placed in an 80˚C shaking water bath for 20 min. The solutions were allowed to cool to 20˚C and then made up to 100 mL with 0.2 M HCl for total oxalate, and high purity water for soluble oxalate, respectively. The extracts in the volumetric flasks were filtered through a cellulose acetate syringe filter with a pore size of 0.45 µm (dismic-25cs, Advantec, California, USA) into 1 mL glass high performance liquid chromatography (HPLC) vials. The samples were analysed with an HPLC system, using a 300 mm × 7.8 mm Rezex ion exclusion column (Phenomenex Inc., Torrance, CA, USA) attached to a Cation-H guard column (Bio-Rad, Richmond, CA, USA) held at 25˚C. The analysis was performed by injecting 20 µL of each sample or standard onto the column using an aqueous solution of 25 mmol/L sulphuric acid (HPLC grade, Baker Chemicals, Phillipsburg, NJ, USA) as the mobile phase, pumped isocratically at 0.6 mL/min, with peaks detected at 210 nm. The HPLC equipment consisted of a Shimadzu LC-10AD pump, CTO-10A column oven, SPD-10Avp UV-Vis detector (Shimadzu, Kyoto, Japan) and a Waters 717 plus auto-sampler (Waters, Milford, MA, USA). Data acquisition and processing were undertaken using the Peak Simple Chromatography Data System (Model 203) and Peak Simple software version 4.37 (SRI Instruments, Torrance, CA, USA). The oxalic acid peak was identified by comparing the retention time with a standard solution and by spiking an already-filtered sample containing a known amount of oxalic acid standard. The insoluble oxalate content was calculated as the difference between the total and soluble oxalate contents [20]. The final oxalate values of all samples were converted to mg/100g WM of the original material, taking into account the moisture content of each sample.
Calcium Determination
Total calcium content was analysed using an atomic absorption spectrometer (AOAC, method 945.46) [21]. Calibration of the instrument was performed using commercial standards following AOAC method 991.25 [21]. The calcium bound in insoluble oxalate was calculated, assuming that insoluble oxalate was predominantly calcium oxalate and that calcium comprised 31.28% of this molecule.
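As an illustration of this calculation (not the authors' own code, and with example input values only of the order reported for taro stems in this and the earlier study), the bound-calcium fraction can be computed as follows.

```python
# Illustrative sketch of the bound-calcium calculation, assuming the insoluble
# oxalate fraction is expressed as oxalate ion and is present entirely as
# calcium oxalate (CaC2O4), of which calcium is 31.28% by mass.
OXALATE_MW = 88.02                       # C2O4^2-, g/mol
CA_OX_MW = 128.10                        # CaC2O4, g/mol
CA_FRACTION = 40.08 / CA_OX_MW           # 0.3128, as stated in the text

def bound_calcium(insoluble_oxalate_mg):
    """mg calcium bound per 100 g WM, given insoluble oxalate in mg/100 g WM."""
    calcium_oxalate = insoluble_oxalate_mg * CA_OX_MW / OXALATE_MW
    return calcium_oxalate * CA_FRACTION     # equivalent to insoluble oxalate * 40.08/88.02

# Example values (roughly those reported for raw Mon Ngot stems):
ca_bound = bound_calcium(83.0)
print(f"bound Ca = {ca_bound:.1f} mg/100 g, i.e. {100 * ca_bound / 87.5:.0f}% of total Ca")
```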
Statistical Analysis
All analyses were carried out in triplicate and the results are presented as mean values ± standard error. Statistical analysis was performed using one-way analysis of variance (Minitab version 16, Minitab Ltd., Brandon Court, Progress Way, Coventry, UK).
Results
Taro stems were often eaten raw and sliced in Cơm Hến and then mixed with other ingredients, but the pre-treatments given to the petioles were important in the preparation of this dish. Removal of the outer skin, slicing and washing the stems reduced the mean total and soluble oxalate contents by 7% and 9%, respectively (Table 1). Compared to the initial raw stems, removing the outer skin, washing, slicing and allowing the slices to wilt was not effective at reducing the total oxalate of the stems; however, the soluble oxalate was reduced by 6.3% when compared to the original whole stems with the skin on (Table 1). The total calcium content of the raw and processed stems was very similar (mean 87.5 ± 1.9 mg/100g WM), except that the stems that had been allowed to wilt appeared to contain a higher calcium content (116.9 mg/100g WM); most of this effect was caused by a loss of moisture during wilting. Overall, a mean 43.3% ± 2.0% of the total calcium content of the raw and processed stems was bound to the insoluble oxalate fraction and would be unavailable for absorption when consumed. Canh Chua Bạc Hà is a popular dish in Vietnam where the stems are boiled with the other ingredients. To prepare this dish, the outer skin of the stem was removed and then slices were cut from the stems and placed in boiling water. Table 2 shows that there were marked reductions in the total, soluble and insoluble oxalate contents of the sliced stems following removal of the outer skin and subsequent boiling. The most effective reduction of soluble oxalate occurred after 20 min boiling (79.6% reduction), while boiling the sliced stems for 10 min gave a 63.4% reduction in soluble oxalate content. The overall mean total calcium content of the raw and cooked stems was 93.6 ± 4.9 mg/100g WM and this was not significantly changed by cooking.
Overall, 39.0% of the total calcium content of the raw stems was bound to the oxalate, making it insoluble. However, the three cooking treatments significantly reduced (P < 0.05) the insoluble oxalate content of the cooked stems, resulting in a significant reduction (P < 0.05) in the amount of bound calcium in the stems of the three cooking treatments (mean 17.2% ± 2.6%). Overall, there was a mean reduction of 56.2% in the amount of bound calcium in the cooked stems when compared to the bound calcium in the raw stems.
Taro consumption is affected by the presence of an acridity factor, which causes a sharp irritation and burning sensation in the throat and mouth on ingestion [22]. Presumably, the irritation arose when calcium oxalate crystals were released and inflicted minute punctures in the mouth and throat. The acridity factor can be reduced by soaking and fermentation during processing [23] [24].
The results in Table 1 show that simple treatments, such as removing the outer skin, washing and slicing the taro stems before preparing the Cơm Hến dish, were not very effective at reducing the oxalate levels in the final products.
The total oxalate content was significantly reduced (P < 0.05) compared with the initial raw stems; however, wilting the stems had no effect on the total oxalate content. These physical treatments had no effect on the insoluble oxalate contents of the stems. It can be seen that removing the skin, slicing and then washing the petioles reduced the soluble oxalate content more than when the stems were washed and then sliced into thin 30 mm slices. Washing the sliced stems allowed more oxalate to leach out from the tissue, but the value observed was not significantly different from the original whole raw stems. No reduction in soluble oxalate content was observed when the sliced stems were allowed to wilt overnight. The most interesting feature of this study was that small reductions in the percentage of bound calcium could be observed when the stems were wilted and also when the stems were washed after being sliced.
Boiling was the most effective way to reduce the soluble oxalate content of the taro stems (Table 2). Traditional cooking of chopped taro stems was normally carried out for 10 to 15 min. In this experiment, the stems were cooked for 20 min to investigate whether an additional cooking time would further reduce the soluble oxalate content. The soluble oxalate content was, indeed, reduced when the sliced stems were boiled for an increased time. In fact, a 63.4% reduction in soluble oxalate occurred after boiling for 10 min compared to the original raw stems, and reductions of 74.5% and 79.6% in the soluble oxalate content occurred after boiling for 15 and 20 min, respectively. These results were similar to values reported by Iwuoha and Kalu [25], where 82.1% and 61.9% oxalate reductions in cocoyam flour occurred when the flour was boiled and roasted, respectively.
Hang et al. [10] reported similar findings when stems of Mon Cham were boiled for up to 60 min, and this reduced the soluble oxalate content by 95.4% and 73% for the stems and leaves of Chia Voi, respectively. However, cooking the taro leaves and stems for 10 min led to a mean 62.1% reduction in soluble oxalate contents.
The total calcium contents of taro stems have not often been recorded, but the levels of total calcium were very similar (mean 94.8 ± 7.5 mg calcium/100g WM) for the four physical treatments (Table 1); cooking had no effect on the total calcium content (mean 96.7 ± 5.8 mg calcium/100g WM). It was interesting to note that the mean % calcium bound/total calcium was 43.3 ± 2.0 (Table 1). This was markedly reduced to a mean of 17.1 ± 2.6 when the stems were cooked (Table 2). These values were relatively high when compared to the values reported by Oscarsson and Savage [7] for the proportion of insoluble calcium bound to total calcium in young growing taro leaves (10%). Overall, the reduction in total calcium bound to the insoluble oxalate fraction in the taro stems following cooking made a very positive improvement in the nutritive value of this interesting food product.
Conclusion
This experiment showed that simple processes, such as peeling and wilting, were not effective ways to reduce the soluble oxalate of raw taro stems. Cooking not only reduced the soluble oxalate content but also reduced the proportion of calcium bound to insoluble oxalate in the stems and was an effective way to minimize the risks of oxalate consumption from taro stems. It was possible that small adjustments to the preparation techniques would allow further reductions in the soluble oxalate content to be established and this would improve the long term nutritional value of these dishes.
Table 1 .
Dry matter, oxalate, and calcium content of raw processed taro stems (mg/100g WM) used to prepare Cơm Hến (values in brackets are % of soluble oxalate in the total oxalate).
Table 2 .
Dry matter, oxalate and calcium content of raw and cooked taro stems (mg/100g WM) used to prepare Canh Chua Bạc Hà. (values in brackets are % of soluble oxalate in the total oxalate content).
Means with different letters within each column differ (P < 0.05).
have shown that the stems of nine different taro cultivars grown in Vietnam in an earlier season, and under different growing conditions, ranged from 132 to 244; from 8.5 to 163 and from 44.6 to 217 mg/100g WM of total, soluble and insoluble oxalates, respectively.In this earlier study, Mon Ngot stems had values for total, soluble and insoluble oxalate of 192, 109 and 83 mg/ 100g WM, respectively, which were much lower than the values reported in this study (Table | 2018-12-26T07:35:15.132Z | 2017-06-07T00:00:00.000 | {
"year": 2017,
"sha1": "4f697891f5a4357f5da5147816c91f2d4a33c58b",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=77161",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "4f697891f5a4357f5da5147816c91f2d4a33c58b",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
199517860 | pes2o/s2orc | v3-fos-license | Evaluation of dietary supplementation of a novel microbial muramidase on gastrointestinal functionality and growth performance in broiler chickens
This study was conducted to assess the effect of dietary supplementation of Muramidase 007 to broiler chickens on gastrointestinal functionality, evaluating growth performance, apparent ileal digestibility, intestinal histomorphology, vitamin A in plasma and cecal microbiota. A total of 480 one-day-old male chicks (Ross 308) were distributed in 16 pens allocated to 2 experimental diets: the control diet (CTR), without feed enzymes, coccidiostat or growth promoters, and the experimental diet (MUR): CTR supplemented with 35,000 units (LSU(F))/kg of Muramidase 007. Digesta and tissue samples were obtained on days 9 and 36 of the study. A lower feed conversion ratio was observed in the MUR treatment. Apparent ileal digestibility of DM, organic matter and energy were improved by Muramidase 007. It was also observed that MUR improved digestibility of total fatty acids, mono-unsaturated fatty acids and poly-unsaturated fatty acids, and the content of vitamin A in plasma at day 9 (P < 0.05). Histomorphological analysis of jejunum samples revealed no differences in villus height or crypt depth, but a higher number of goblet cells and intraepithelial lymphocytes at day 36 with MUR. No differences were observed in plate counts of enterobacteria or Lactobacillus along the gastrointestinal tract, nor in the cecal short-chain fatty acids. A statistical trend was observed for reduction of cecal clostridia at day 9 for MUR. Analysis of cecal microbiota structure by 16S rRNA gene sequencing revealed relevant changes correlated to age. At day 9, broilers receiving MUR showed decreased alpha diversity compared to CTR that was not detected at day 36. Changes in specific taxonomic groups, with an increase in the Lactobacillus genus, were identified. In conclusion, evaluation of the variables in this study indicates that dietary Muramidase 007 contributes to improving the feed conversion ratio and gastrointestinal function in broiler chickens. Effects could have been mediated by slight shifts observed in the intestinal microbiota. More studies are warranted to fully understand the mechanisms involved.
INTRODUCTION
A fast growth rate and high feed efficiency are the 2 main targets in modern poultry production. For optimum performance of birds, important factors affecting gastrointestinal health, like the genetic potential of the birds, quality of the diets, environmental conditions, and disease outbreaks, should be taken into consideration (Yegani and Korver, 2008). Actually, in the modern poultry industry, multi-environment selection is employed by the breeding companies (Neeteson, 2010; While probiotics, essential oils, organic acids, or prebiotics are used to help stabilize the gut microbiota, little attention has been paid so far to the massive turnover and death of the bacterial populations that takes place in the GI tract, resulting in high amounts of bacterial cell wall fragments containing peptidoglycans (PGN) (Johnson et al., 2013). PGNs are the major structural components of the cell wall, uniquely found in bacteria, and considered as conserved products of bacterial metabolism and activity modulators in the GI tract (Humann and Lenz, 2009).
Muramidase (EC 3.2.1.17), also known as lysozyme or N-acetylmuramidase, belongs to the family of glycosyl hydrolytic enzymes. Muramidase cleaves the β-1,4glycosidic linkages between N-acetylmuramic acid and N-acetyl glucosamine, the basic elements in the carbohydrate backbone of PGN from bacterial cell walls. The most studied muramidase is the one abundantly found in hen egg white, however, many muramidases are naturally found in a great variety of animal secretions, plants, or micro-organisms. Muramidases are ubiquitous and occur naturally in the GI tract of animals (Sahoo et al., 2012).
Studies with nursery pigs using diets supplemented with muramidase (isolated from hen egg white) resulted in ameliorated growth performance, probably due to improvements in the histomorphology of the small intestine (Oliver and Wells, 2013, 2015). Contributing to better gut integrity as well as a stronger nonspecific immune response could also explain the performance improvements (Oliver and Wells, 2015; Long et al., 2016). Recently, it was demonstrated that dietary supplementation of a novel microbial muramidase (Muramidase 007) in broiler diets improved the performance variables and increased the number of lactobacilli without disturbing the rest of the functional microbiota in the caecum of broilers, while proving safe for the chickens in a tolerance study (Lichtenberg et al., 2017). Moreover, Goodarzi Boroojeni et al. (2019), supplementing increasing doses of this muramidase, reported improvements in feed conversion ratio (FCR) and apparent ileal digestibility (AID) of nutrients.
The present study aimed to evaluate the impact of dietary supplementation of Muramidase 007 on the GI functionality of broiler chickens by measuring AID of nutrients, intestinal histomorphology, vitamin A content in plasma, microbial ecosystem and metabolism, and animal performance variables.
MATERIALS AND METHODS
The in vivo experimental procedure of this study was performed at the Experimental Unit of the Universitat Autònoma de Barcelona (UAB) and received prior approval from the Animal and Human Experimental Ethical Committee of this Institution. The treatment, management, housing, husbandry and slaughtering conditions strictly conformed to European Union Guidelines (Directive 2010/63/EU).
Muramidase
The novel muramidase used in this study was selected through a discovery program. The gene coding for this muramidase (denominated Muramidase 007) from the fungus Acremonium alcalophilum (strain 114.92) was expressed in an industrial production host, produced in enough amounts and characterized as described by Lichtenberg et al. (2017). This product was produced by Novozymes A/S (Bagsvaerd, Denmark).
Location and Housing
Animals were housed in one single room with 16 floor pens (8 pens (1.5 m × 1 m) at each side of the room). The environmental conditions (lighting program, temperature, relative humidity and ventilation rates) were controlled accordingly to the Ross broiler management guidelines. Animals disposed of nipple drinkers (3 drinkers/pen) and manual pan feeders (1 pan/pen). Litter material was wood shavings.
Animals and Diets
A total of 480 one-day-old male broiler chickens (Ross 308), were obtained from a local hatchery (Pondex SAU, Lleida, Spain), already vaccinated in ovo against Gumboro and Marek diseases. Additionally, chicks were vaccinated against coccidiosis (coarse spray; Hypracox, Amer, Spain) and infectious bronchitis (fine spray) at day 1 after hatching. The birds were individually wingtagged, weighted and randomly distributed in 16 pens (30 chicks per pen).
Each pen was allocated to one of 2 experimental treatments: A control diet (CTR) or CTR diet supplemented with Muramidase 007 (MUR) at 35,000 muramidase units (LSU(F))/kg. During the experimental period the birds received a feeding program of 2 phases: a starter diet from 0 to 21 d (in crumble form) and a grower diet from 21 to 36 d (in pellet form). Diets were formulated to meet or exceed FEDNA (2008) requirements, as it figures in Table 1. Basal diets did not contain any feed enzymes (other than Muramidase 007 in the experimental diet), coccidiostats, veterinary antibiotics or any other growth promoters. In addition, titanium dioxide (TiO 2 ) was added as indigestible marker at 0.5%.
Experimental Procedures
Individual weight of the animals and feed residuals (by pen) were registered at days 0 (arrival), 9 (first sampling), 21 (change of diet), and 36 (final day, second sampling) to monitor individual body weight (BW) as well as the feed intake along the study. From these values the average daily feed intake (ADFI), average daily weight gain (ADG), and the FCR were calculated. At 9 d of age, 21 birds per cage were randomly selected and euthanized by decapitation, and representative samples of mid-jejunum tissue, ileal digesta and blood were taken. At 35 d of age, the remaining birds were euthanized, and the same procedure was repeated.
Jejunal tissue was collected from 1 bird per pen for histological analysis. Content from different intestinal segments: crop, ileum and cecum were collected; individually from 2 birds per pen, 1 for classical microbiology and the other for microbiota (1 cecum) and short-chain fatty acids (SCFA) analysis (other ceca). Moreover, representative sample of terminal ileum content of each cage was taken (21 birds at 9 d; 7 birds at 36 d) for AID analyses. Digesta was homogenized, kept frozen at -20°C, freeze-dried, grounded, and kept at 5°C until analyzed.
Pooled sample of blood was sampled via venipuncture (2 or 3 animals on 9 d and individual bird on 36 d from each pen) and was taken in heparinized tubes for vitamin A analyses.
Analytical Methods
AID AID was calculated using the index marker for a 100% recovery. Analytical determinations of feeds and digesta were performed accordingly to the Association of Official Agricultural Chemists (AOAC) International methods (2005): dry matter (Method 934.01) by desiccating the sample in an air-forced oven at 103°C for 24 h. Organic matter (OM) content was gravimetrically measured by placing samples in a muffle furnace at 550°C for 4 h, (ash, Method 942.05), crude fat (Method 2003.05 for feed and with previous acid hydrolysis, following method 954.02 of the AOAC for excreta) and gross energy content (IKA-Kalorimeter system C4000, Staufen, Germany). The fatty acid (FA) content of the feed and digestive content was determined according to the method of Sukhija and Palmquist (1988). This analytic procedure consists of a direct transesterification (the lipid extraction and FA methylation is achieved in only one step). Samples were incubated at 70°C with methanolic hydrochloric acid (a mixture of methanol and acetyl chloride) for the methylation. After the extraction and methylation potassium carbonate and toluene were added to separate the organic layer. Nonadecanoic acid (C19; Sigma-Aldrich Chemical Co., St. Louis, MO) was used as internal standard. The final extract was injected in a gas chromatograph (HP6890, Agilent Technologies, Waldbronn, Germany) following the method conditions previously described by Cortinas et al. (2004). FAs were identified by matching their retention times with those of their relative standards (Supelco 37 component FAME Mix, Sigma-Aldrich Co., St. Louis, MO) and quantified by internal normalization. Nonadecanoic acid was used for the calibration curves and quantification of FA. Total FAs (TFA) content was calculated as the sum of individual FAs. Nitrogen was analyzed by the DUMAS method. Analysis of Ti of the Titanium dioxide (TiO 2 ) marker was performed by atomic absorption spectroscopy (AAS). The AID of dry matter (DM), OM, nitrogen (N), FA, and energy (E) was calculated as follows: Where [Ntr] M is the nutrient content in the ileal digesta, [Ntr] D is the nutrient content in the diet, [Ti] M is the concentration of Ti in the ileal digesta, and [Ti] D is the concentration of Ti in the diet.
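As an illustration (not the authors' code; the input numbers are hypothetical), the marker-based AID calculation described above can be written with the standard index-marker formula, consistent with the variables defined in the text.

```python
# Illustrative sketch of the apparent ileal digestibility calculation with TiO2
# as inert marker, using the standard index-marker formula; values are hypothetical.
def aid_percent(ntr_diet, ntr_digesta, ti_diet, ti_digesta):
    """Apparent ileal digestibility (%) of a nutrient.

    ntr_diet, ntr_digesta : nutrient concentration in the diet and ileal digesta
    ti_diet, ti_digesta   : Ti concentration in the diet and ileal digesta
    """
    return 100.0 * (1.0 - (ti_diet / ti_digesta) * (ntr_digesta / ntr_diet))

# Hypothetical example: the marker concentrates ~3x in the digesta while the
# nutrient concentration falls, giving a digestibility of about 75%.
print(f"AID = {aid_percent(ntr_diet=20.0, ntr_digesta=15.0, ti_diet=5.0, ti_digesta=15.0):.1f} %")
```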
Vitamin A Plasma proteins were precipitated with ethanol. Retinol was extracted from the aqueous EVALUATING A DIETARY NOVEL MURAMIDASE IN BROILERS suspension with n-hexane/butylated hydroxytoluene solution. After centrifugation, an aliquot of the organic phase was dried, re-dissolved in solvent solution and injected into a reversed-phase (C18) HPLC system. The detection was performed with a fluorescence detector.
Classical Microbiology and Jejunal Histomorphology Colony forming units (cfu) count of enterobacteria, lactobacilli and clostridia, in the content of crop, ileum and cecum was performed by selective growth medium. Enterobacteria were counted after 24 to 48 h of incubation in MacConkey agar. Lactic acid bacteria were determined in MRS agar and clostridia in SPS Agar used for isolation and enumeration of sulphite-reducing clostridia. To evaluate possible changes in the microbiota metabolism, the concentration of SCFA was analyzed in cecal samples by gas chromatography following the method of Jouany (1982).
Molecular Microbiology DNA from cecal samples was extracted using the QIAamp DNA Stool Mini Kit (Qiagen, Toronto, Canada) according to the manufacturer's instructions. The global structure, dynamics and functionality of the cecal microbial populations was analyzed by high throughput sequencing, targeting V3-V4 region of 16S rRNA, using the kit MiSeq® Reagent Kit v2 (500 cycle) (MiSeq, Illumina, San Diego, CA).
Primers used in the construction of libraries were the following (generating amplicons of approximately 460 bp): F: 5'-TCGTCGGCAGCGTCAGATGTGTATAAGAGACAGCCTACGGGNGGCWGCAG; R: 5'-GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAGGACTACHVGGGTATCTAATCC. Sequence reads of the 16S rRNA gene generated from the MiSeq Illumina® system were processed in QIIME v.1.9.1. The quality filter of already demultiplexed sequences was performed at a maximum unacceptable Phred quality score of Q20. Resulting reads were clustered into operational taxonomic units (OTUs) using the uclust algorithm with 97% sequence similarity and the subsampled open-reference picking method (Rideout et al., 2014), with 10% of sequences subsampled. Representative sequences were assigned to taxonomy against the bacterial 16S GreenGenes v.13.8 reference database at a 90% confidence threshold, and sequence alignment and phylogenetic tree building were obtained through uclust and FastTree. Thereafter, chimeric sequences were removed with ChimeraSlayer with default settings, and further quality filtering consisted of removing singletons and OTUs with relative abundance across all samples below 0.005%, as recommended by Bokulich et al. (2013).
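As an illustration of the final OTU-table filtering criteria described above (dropping singletons and OTUs below 0.005% relative abundance), the following minimal sketch uses a hypothetical count table; it is not the study's QIIME pipeline.

```python
# Illustrative sketch of the OTU-table filtering step: drop singleton OTUs and
# OTUs whose relative abundance across all samples is below 0.005%.
import pandas as pd

# rows = OTUs, columns = samples, values = read counts (hypothetical)
otu = pd.DataFrame(
    {"s1": [500, 1, 40000, 2], "s2": [700, 0, 52000, 1]},
    index=["otu_a", "otu_b", "otu_c", "otu_d"],
)

total_reads = otu.values.sum()
rel_abundance = otu.sum(axis=1) / total_reads * 100        # % of all reads per OTU

keep = (otu.sum(axis=1) > 1) & (rel_abundance >= 0.005)    # both filtering criteria
filtered = otu.loc[keep]                                   # otu_b (singleton) and otu_d (rare) removed
print(filtered)
```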
Statistical Analysis
The results are expressed as means with their standard errors unless otherwise stated. Data were analyzed by ANOVA using the GLM procedure of SAS with the following statistical model: Y_ij = µ + B_i + ε_ij, where Y_ij is the dependent variable, µ is the overall mean, B_i is the additive effect, and ε_ij is the residual error effect.
When frequencies were analyzed, the Fisher's exact test was used. For the microbiological data, the plate counts values were log-transformed. All statistical analyses were performed using the Statistical Analysis Software SAS version 9.2 (SAS Institute Inc., Cary, NC). The pen was considered as the experimental unit for all the variables analyzed except for classical microbiology and histology. In those analyses, the bird was considered the unit since only 1 bird per pen was sampled. The α level used for the determination of significance for all the analysis was P = 0.05. The statistical trend was also considered for P-values > 0.05 and < 0.10.
Biostatistics of quality-filtered sequences of the 16S rRNA gene were performed using open-source software (R v.3.4.0; R Core Team, 2013) after importing the OTU table into R using the phyloseq package. Diversity and ordination analyses (non-metric multidimensional scaling, NMDS) were performed using the vegan package at the OTU level. Richness and alpha diversity were calculated with raw counts, while beta diversity and ordination analyses were obtained with relative abundances. Alpha diversity consisted of the calculation of three related indexes: Shannon, Simpson and inverse Simpson. For beta diversity, the betadiver() function based on the Whittaker index was used. To compare any differential effects of treatments, an ANOVA analysis was performed for alpha and beta diversities. A Bray-Curtis distance matrix was calculated from the dissimilarities between pairs of samples, and the relative position of each sample was projected and analyzed in the NMDS plot. Finally, read counts were normalized with the metagenomeSeq package, which uses a cumulative-sum scaling where raw counts are divided by the cumulative sum of counts up to a particular quantile, by which differential abundance analysis under a zero-inflated Gaussian model could be performed. To achieve that, taxa were aggregated at the phylum and genus level and expressed as log10 (n + 1).
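For reference, a minimal sketch of the three alpha-diversity indices mentioned above, computed from a hypothetical vector of OTU counts using their standard definitions (this is not the study's R code).

```python
# Illustrative sketch of the Shannon, Simpson and inverse Simpson indices for one
# hypothetical cecal sample, using their standard definitions.
import math

counts = [120, 80, 40, 10, 5, 1]                 # hypothetical OTU read counts
p = [c / sum(counts) for c in counts]            # relative abundances

shannon = -sum(pi * math.log(pi) for pi in p)    # Shannon index H'
simpson = 1 - sum(pi**2 for pi in p)             # Simpson index (1 - D)
inv_simpson = 1 / sum(pi**2 for pi in p)         # inverse Simpson (1/D)

print(f"Shannon = {shannon:.3f}, Simpson = {simpson:.3f}, inverse Simpson = {inv_simpson:.3f}")
```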
RESULTS
The trial was successfully carried out and animals showed good health along the whole study. Mortality was very low along the trial being 0.83% and 1.25% for CTR and MUR, respectively with no difference between treatments. Table 2 shows the evolution of BW, ADG, ADFI, and FCR along the study. There were no differences between treatments in the BW or in the ADG although final BW showed numerical values (P = 0.172) with the addition of MUR. Moreover, animals receiving MUR in the diet showed higher ADFI (P = 0.060) and higher FCR (P = 0.050) when all the experimental period was considered.
The AID (%) of E, DM, OM, N, TFA, saturated fatty acids (SFA), mono-unsaturated fatty acids (MUFA), and poly-unsaturated fatty acids (PUFA), estimated using TiO₂ as digestibility marker, are shown in Table 3. The inclusion of MUR in the diets increased the digestibility of E at day 36 (P < 0.001) and showed numerically higher values at day 9 (P = 0.120). The increased amount of digested energy could be the result of the observed increase in the DM and OM digestibility at day 36 (more than 4 percent; P < 0.001). Digestibility of N showed an increase of more than 2% at day 36, although differences did not reach statistical significance (P = 0.095). Digestibility of TFA was higher with MUR but the effects were only evidenced at day 9 (P = 0.049). This increase in the digestibility with the inclusion of MUR in the diets was seen in all studied fractions: SFA (P = 0.08), MUFA and PUFA.
Regarding plasma vitamin A concentration, it was found an increase in total retinol at day 9 with the supplementation of MUR (MUR: 615.6 ng/mL vs. CTR: 521.4 ng/mL; P = 0.040). At day 36, both diets showed higher values, without differences (CTR: 775.5 ng/mL and MUR: 740.9 ng/mL; P = 0.556).
Histomorphological observations in jejunum are shown in Table 4. None of the experimental diets modified the villus height or crypt depth, neither the villus/crypt ratio. Increases (P < 0.05) were detected with the MUR treatment at day 36 in IEL and GC, regardless if they were expressed in absolute or relative terms.
Regarding possible changes induced by the experimental diets on cecal profile and concentration of total SCFA, no statistically significant differences among treatments on the profile, concentration of SCFA or their molar percentage at any of the sampling days were detected (data not shown).
Regarding possible effects on microbiota, neither lactobacilli nor enterobacteria (log cfu/g fresh matter) were modified by the experimental treatments in any gastrointestinal section and any sampling day (Table 5). For clostridia, as plate counts were in some animals below detection level of the method, a frequency analysis was performed (Table 6). By this mean, it could be seen that MUR tended to decrease the number of animals with detectable clostridia in cecum at day 9 (P = 0.060).
The analysis of the cecal microbiota profile by sequencing of the V3-V4 region of the 16S rRNA gene revealed relevant structural changes related to the development of the cecal microbiota with age. Figure 1 shows the community structure based on Bray-Curtis distance matrix with OTUs. A clear impact of the age was observed on cecal microbiota structure being clustered closer microbial structures of day 9 than those of day 36, coincident with numerical higher beta diversity value for the later (data not shown). Interestingly, at day 36 a group of animals from both treatments clustered separately to the rest (4 from CTR treatment and 6 from MUR treatment) with a profile closer to those animals of day 9, suggesting that these animals had not evolved to a mature ecosystem. Nonetheless, animals of day 9 presented higher richness range (600 to 1000 OTU) than those from day 36 (250 to 800 OTU), despite similar number of reads between both ages. Phyla distribution was similar between ages, being Firmicutes (60 and 55% for 9 and 36 d, respectively) and Bacteroidetes (26 and 35% for 9 and 36 d, respectively) the main phyla registered. When analyzing differences in relative abundance of genera (Tables 7 and 8), it was registered a relevant increase in the percentage of Bacteroides with age (from 7 to 29%). Regarding to changes promoted by experimental diets, animals receiving MUR showed a decrease in alpha diversity (Simpson Index (P = 0.052); Shannon index (P = 0.091); Simpson inverse index (P = 0.001)) but no differences were detected in beta diversity. At day 36 no changes were detected in alpha or beta diversity. When differences in abundance of taxa promoted by diets were analyzed, no changes were seen in phyla at day 9, nor at day 36. Only minor changes were found in genera at day 36 (Table 8) with an increase in Lactobacillus (3.28 ± 2.319% vs. 4.90 ± 1.087%) and cc-115 (0.01 ± 0.003% vs 0.15 ± 0.046%) genera when analyzed as normalized counts (P < 0.05). Table 3. Apparent ileal digestibility (%) of energy and nutrients in broiler chickens fed a control diet (CTR) or the same diet supplemented with Muramidase 007 (MUR) during the starter and grower periods.
Effect on Growth Performance and AID
In the present study, the inclusion of Muramidase 007 in broiler chicken diets, throughout the whole growing period, had a relevant impact on their FCR at the end of the trial. FCR improved from 1.43 to 1.36 g feed:g BW gain (P = 0.045) at day 36. These data agree with those obtained by Lichtenberg et al. (2017) and Goodarzi Boroojeni et al. (2019), who reported a significant improvement in BW gain and FCR when Muramidase 007 was included at different dose levels. Other studies have also reported similar effects on growth performance when other muramidases, from different origins, were included in broiler diets. Humphrey et al. (2002) found that chickens fed a conventional diet containing 10% modified rice expressing lysozyme had an improved feed efficiency (P = 0.01). Liu et al. (2010) reported that supplemental hen egg white lysozyme (HEWL) significantly improved FCR and tended to increase weight gain (P = 0.082). Abdel-Latif et al. (2017) also reported that BW gain and FCR were significantly improved when HEWL was included in poultry diets (P < 0.05). However, Gong et al. (2016) reported no effect on growth variables from the inclusion of HEWL in broiler diets (P > 0.05). This improvement observed in FCR is consistent with the improvement reported in ileal apparent digestibility of energy, DM, and OM that was also reported by Goodarzi Boroojeni et al. (2019) for this muramidase. It can be hypothesized that an improved digestibility of nutrients could be due to a better intestinal absorption (Noy and Skylan, 1995). However, attending to the histomorphological effect of the diets, we were not able to show changes in the height of jejunal villi or crypt depth, although Goodarzi Boroojeni et al. (2019) were able to find an increase in the ileal villus length to crypt depth ratio with this same muramidase. However, at the end of the study, we observed an increment in the number of GC in the jejunum of chickens fed the MUR diet. Mucus constitutes a protective barrier against microorganisms and physical and chemical attacks, and has the role of lubrication of the digestive tract (Deplancke and Gaskins, 2001); it is considered a major component of innate immunity. We could therefore hypothesize that the increase observed in GC, and presumably in the mucus layer density, could have facilitated a better nutrient digestion and absorption, explaining the improved efficiency of feed utilization.
Table 5. Plate counts (log cfu/g fresh matter) for different microbial groups in crop, ileum, and cecum digesta in broiler chickens fed a control diet (CTR) or the same diet supplemented with Muramidase 007 (MUR) during the starter and grower periods.
Figure 1. Community structure based on Bray-Curtis distance matrix. Non-metric multidimensional scaling (NMDS) of operational taxonomic unit (OTU) relative abundances in broiler chickens fed a control diet (CTR) or the same diet supplemented with Muramidase 007 (MUR) during the starter and grower periods. Data are clustered by the age of the animal (sampling days at 9 and 36 d of life) and the experimental diets (CTR diet and MUR diet). Black: CTR day 9; blue: MUR day 9; white: CTR day 36; and green: MUR day 36.
Table 7. Microbiota composition at the genus level for the 15 most abundant genera in cecal digesta of broilers at 9 d of age receiving the control diet (CTR) or the same diet supplemented with Muramidase (MUR). Results are expressed as relative abundance (% ± SE) by diet. No statistical differences were found between experimental diets.
However, it is true that the role of intestinal mucus is complex and still not fully understood, and that a higher number of goblet cells does not necessarily mean a higher amount of mucus. The mucus layer is known to provide a niche for bacterial colonization, offering an attachment site and a carbon source for resident bacteria. Both bacteria and the host can influence the composition of the mucus layer, and mucus can in turn be a way for the host to modulate its indigenous microbiota, although mucus-derived components could also offer cues for the adaptation and pathogenesis of some bacteria (Sicard et al., 2017). Some authors have also described stimulation of mucin production by molecular patterns such as lipopolysaccharide (Petersson et al., 2011), relating mucin production to inflammation. Taking all this into consideration, it is therefore simplistic to draw conclusions just from an observed increase in GC.
Our results show that inclusion of the Muramidase 007 increased ileal apparent digestibility of fat (TFA, SFA (P = 0.08), MUFA and PUFA), particularly at day 9 suggesting an early development of intestinal capacity to digest fat nutrient. Accordingly, plasma levels of vitamin A also were increased at day 9 with Muramidase 007, being another indication of a better digestion and absorption of fats. In poultry, the jejunum is the major site of digestion and absorption of fat, with the digestion continuing in the upper ileum (Ravindran et al., 2016). It has been proposed that he immature digestive system of young chickens could lead to the misuse of nutrients (Kaczmarek et al., 2014) and particularly at first ages when chickens show an impaired digestion of fat. According to Kussaibati et al. (1982), the intestinal microbiota would be responsible for the decreased apparent fecal digestibility of vegetable lipids in chickens younger than 3 wk. This reduction in fat digestibility would be the result of the deconjugation of the bile salts by certain bacterial species, in particular lactobacilli. Since the conjugated bile salts serve for the formation of micelles, their low concentration would reduce lipid solubilization and thus their absorption, in particular those containing long chain saturated fatty acids (Gabriel et al., 2005). In this regard, the analysis of the cecal structure by the 16S rRNA gene sequencing revealed important structural changes in the microbiota between days 9 and 36 with significant increases with age in the members of Bacteroides genus.
Effects on Intestinal Architecture and Immune Response
As we have seen before, improvements in AID of nutrients were not related to changes in intestinal histomorphometry in terms of villus: crypt ratio. However, significant changes in the number of GC and IEL were found with supplementation of Muramidase 007 at day 36 that could reflect an intestinal response conditioned to differential luminal stimuli. Changes in the microbiota, or an increase in PGNs from a higher bacterial turnover, or other metabolites, could be behind these effects. Lichtenberg et al. (2017) showed higher number of Lactobacillus in the ceca of the broilers supplemented with Muramidase 007 compared to the un-supplemented broilers. In the present study, we were unable to demonstrate quantitative changes in lactobacilli or enterobacteria by plate counts among treatments. However, analysis by 16S rRNA gene sequencing showed a decrease in alpha diversity at day 9 and also an increase in the relative abundance of sequences from the Lactobacillus genus supplementing Muramidase 007. Therefore, it cannot be discarded that some changes in minor microbial groups or in microbiota structure could have induced those changes in the local immune response.
Changes promoted in IEL numbers were quantitatively low, being within the physiological range and could reflect an activation of the intestinal mucosal immune response of the animals without inducing inflammation. IEL are considered part of the gutassociated lymphoid tissue and provide a first line of defense against intestinal microbial invasion. These frontline lymphocytes residing within the epithelial layer are incredibly heterogeneous regarding their function and phenotype. IEL subsets can provide immune surveillance at the epithelial barrier to prevent or impair infection in the intestine either through innatelike mechanisms or as antigen-specific effectors or memory CD8αβ+ T cells. However, we are only recently starting to unravel the complex and sophisticated network of mucosal immune responses and function and regulation of these cells (Sheridan and Lefrançois, 2011).
Similarly, goblet cells contribute to the protection of the intestinal epithelium by the production and maintenance of the protective mucus blanket by synthesizing and secreting high-molecular-weight glycoproteins known as mucins. Changes in GC functions and in the chemical composition of intestinal mucus have been described in response to a broad range of luminal insults, including alterations of the normal microbiota (Montagne et al., 2004;Sicard et al., 2017). Available data indicate that intestinal microbes may affect GC dynamics and the mucus layer directly via the local release of bioactive factors or indirectly via activation of host immune cells (Deplancke and Gaskins, 2001). Changes promoted by Muramidase 007, directly or indirectly, in intestinal ecosystem or in the release of bioactive factors could therefore be behind the observed effects.
Effects on Gastrointestinal Microbial groups and Cecal Fermentation
Firmicutes and Bacteroidetes were the two main bacterial phyla in the cecum of broilers, representing more than 80% of total sequences. These values are in accordance with values described by other authors using 16S rRNA gene sequencing methodology (Kers et al., 2018). Changes observed with the age of the animals, however, contradict some previous results.
Whereas variation between individuals has been considered to decrease with the age of the animals (Crhanova et al., 2011), in the present study we found an increase of beta diversity with age, in accordance with what has been described by other authors (Ballou et al., 2016). Nevertheless, the dynamics and structure of the intestinal microbiota are affected not only by age but also by other factors such as diet, housing, litter, or climate (Kers et al., 2018).
Including exogenous muramidase in chicken diets, depending on dose and specificity, could have an impact on the intestinal microbiota, considering that in the gut the natural endogenous muramidase is part of the primary protective factors in epithelial secretions (Harder et al., 2007; Dave et al., 2016). HEWL is more effective against gram-positive bacteria, but has also been demonstrated to kill gram-negative bacteria in vitro (Masschalck and Michiels, 2003). In vitro experiments with transgenic human lysozyme have shown activity against several gastrointestinal pathogens, including Clostridium perfringens (Brundige et al., 2008). Liu et al. (2010) showed that the administration of HEWL in broiler diets reduced C. perfringens, Escherichia coli and lactobacilli populations in the ileal digesta of birds with and/or without C. perfringens challenge. In the current study, the plate count results for Lactobacillus and enterobacteria did not show a significant difference with the inclusion of the microbial muramidase in any of the crop, ileum or cecum samples. However, a statistical trend for a decrease was observed for Clostridium in the ileum (P = 0.06) at day 9, suggesting a possible impact of the additive on the microbiota structure at an early age, reducing the growth of this particular microbial group. Moreover, the 16S rRNA gene sequencing also showed that at day 9 animals receiving Muramidase 007 had lower alpha diversity, although we were not able to detect changes in particular taxonomic groups. Nevertheless, such variations were small and could be the indirect consequence of the modification of the microenvironment in the GI tract by the hydrolysis products of the muramidase activity or by indirect effects associated with the increased digestibility of nutrients.
Differences between results of these studies could be explained by the different types (origin) of the muramidases evaluated: HEWL, microbial, etc., with different properties in vitro and in vivo.
In summary, the results of this study demonstrate the positive effects of the inclusion of Muramidase 007 in the diet of broiler chickens, resulting in increased feed efficiency and an improved FCR. This improvement can be explained by a significant improvement in the ileal digestibility of energy and nutrients. Effects on the cecal microbiota and its metabolism appear to be minimal, although some changes could be detected in particular microbial genera, such as an increase in Lactobacillus. Additional studies might bring a better understanding of the mechanisms underlying these effects. | 2019-08-11T13:03:28.577Z | 2019-12-30T00:00:00.000 | {
"year": 2019,
"sha1": "1fa9884f6ef04de4a2d5412f1e47f5de82e0d329",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.3382/ps/pez466",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e093a8a38a53f40bd56a3881841bb4aec9c9b1c7",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
251853364 | pes2o/s2orc | v3-fos-license | Particulate Matter in the Korea Train eXpress (KTX) Cabin and its Exposure
This study aims to assess the particulate matter (PM 1 , PM 2.5 , PM 10 ) and black carbon (BC) in the Korea Train eXpress (KTX) cabin while the train is running, and the personal PM 2.5 exposure of female/male passengers who use the KTX 20 days a month to commute. Intensive measurements were made on a day when the outside ambient PM concentration was much higher than usual. To compare with the PM concentration in a subway cabin, a measurement was also performed in some sections of the Seoul Metro subway (from Namyoung Station (hereafter referred to as "Sta.") to Jonggak Sta.). The amount of PM 2.5 exposure (Exposure PM 2.5 (μg)) was calculated for male/female passengers who regularly board the KTX. The Car-exposure PM 2.5 (μg), which is the amount of PM 2.5 exposure when moving through the same section by car, was also calculated. The PM concentration in the KTX cabin rose while the train was stopped at stations and fell while the train was running. The PM 2.5 concentrations inside the KTX cabin at the stop stations exhibited a remarkable positive correlation with outdoor concentrations. Compared to the PM concentrations measured in the cabin of the Seoul Metro subway, PM 1 , PM 2.5 , and PM 10 in the KTX passenger cabin were 74.9%, 73.3%, and 62.7% of those in the subway cabin, respectively. The PM 2.5 exposure amount (exposure PM 2.5 (μg)) when moving through the same section using the KTX and a passenger car was calculated; as a result, the exposure PM 2.5 (μg) for both males and females was 5.7 times lower in the KTX than in the car. The mapping of the BC concentration drawn on the KTX line from Iksan Sta. to Gwangmyeong Sta. shows that it fluctuated greatly for each service section and stop station.
INTRODUCTION
Super high-speed trains are operated in many advanced countries such as France, Japan, Germany and Spain. Korea also joined the ranks of super high-speed rail by opening the Korea Train eXpress (KTX) in April 2002. Because of its safety, speed, and convenience, KTX ridership reached 66,128,000 passengers in 2019, an increase of 76.4% compared to the 37,477,000 passengers in 2009 (Statistics KOREA Government, 2021).
With passenger numbers increasing year by year, it is important to evaluate the air quality in the train cabin and its health effects on passengers, especially those who use the train regularly, for example to commute to work.
Of course, the Korea Railroad Corporation (KORAIL) regularly checks the filters inside and outside the train to manage air quality in the train cabin. As shown in Fig. 1, external air treated by the air purification filter installed at the lower part of the train side flows into the train cabin through the vent hole under the window, and internal air is discharged at the lower part of the train.
It is generally thought that the exhaust emissions of CO, NO x and PM from rail transport, including high-speed trains, are lower than those from road transport (Uherek et al., 2010). However, it is still necessary to evaluate the air quality in the cabin of a high-speed train, both because of pollutants flowing in from the outside when the doors open and close and because of pollutants generated in the train cabin itself.
According to Korea's recommended indoor air quality standard for public transportation vehicles revised in 2013, the standard for PM 10 (average value for one run of the route) is 200 μg/m 3 or less for urban railways and 150 μg/m 3 or less for trains such as the KTX. Fortunately, in the indoor air quality recommendation standard for public transportation vehicles under the Indoor Air Quality Management Act, implemented in April 2021, a PM 2.5 item was newly added, with a standard of less than 50 μg/m 3 .
So far, there have been only a few air quality surveys in the KTX cabin (So and Kang, 2006), and these addressed only PM 10 . There has been no research on PM 2.5 , because the corresponding regulation was not established until 2021.
Meanwhile, some studies (Front et al., 2020; Jeong et al., 2017) have reported significant concentrations of nanoparticles, BC, and PM 2.5 in railway environments. Unfortunately, however, there have been no surveys of fine/ultrafine PM (PM 2.5 and PM 1 ), known to be much more harmful to human health, in the KTX cabin.
The goal of this study is to assess the PM (PM 1 , PM 2.5 , PM 10 ) and BC concentrations in the cabin of the running KTX. In addition, as an evaluation of the health effects of PM, the personal PM exposure was estimated for female/male passengers who regularly use the train to commute.
1 Field Study Design
The section from KTX Iksan Sta. to Gwangmyeong Sta. was the subject of the cabin measurements in this study. Iksan Sta. is one of the main KTX stations, where the Honam Line and the Jeolla Line intersect. Gwangmyeong Sta. is the station where the four KTX lines, except the KTX Gangneung Line, overlap. Fig. 2 shows the service section covered by the KTX cabin measurements in this study.
In selecting the in-cabin measurement time, the forecast ambient outdoor air quality near the target service section of the KTX was consulted. Fig. 3 shows the outdoor PM 10 and PM 2.5 measured at the air quality monitoring stations (AQMSs) near four KTX stations during February 2020. The PM data of the four AQMSs were provided by the Korea National Institute of Environmental Research (https://www.nier.go.kr/NIER/kor/research).
Our intensive measurement was made at noon on February 21, referring to the daily fine PM forecast by the Korea Meteorological Administration (https://www.weather.go.kr/w/index.do). In Fig. 3, as forecast, relatively high PM concentrations (PM 10 : 63-158 μg/m 3 , PM 2.5 : 52-84 μg/m 3 ) were observed at all four AQMSs between February 21 and 22. For comparison, additional PM measurements were also made outside KTX Iksan Sta. and in the cabin of the Seoul Metro subway (from Namyoung Sta. to Jonggak Sta.) on the same day.
2 Real-time PM and BC Measurements
Mass concentrations of the size-resolved PM (PM 1 , PM 2.5 , PM 10 ) were monitored every 5 seconds with a portable AirBeam monitor (HabitatMap Inc.). This portable device is based on light scattering to determine particle size. Light scattered by the PM drawn through a sensing chamber is registered by a detector and converted into the size-resolved PM mass concentrations. The measured data can be wirelessly transmitted via Bluetooth to the AirCasting Android app, which maps and graphs the data on a smartphone (AirBeam Technical Specifications, Operation & Performance, 2019). This small portable monitor has been used in many previous studies, and its performance has also been proven (Badura et al., 2018; Mukherjee et al., 2017; Jiao et al., 2016). A thorough evaluation of its precision has been carried out by the US EPA by comparison with PM 2.5 data measured by an established reference device. According to Jiao et al. (2016), a fairly high correlation (R 2 = 0.99) was found between the two sets of data measured by the two devices.
As the BC monitor, the Aethalometer® AE51 (Aethlabs, USA) was selected. It quantifies the BC concentration from the attenuation of near-infrared light (880 nm wavelength) caused by BC accumulation on a special square filter. It can monitor the real-time BC concentration with good sensitivity (0.001 μg/m 3 ) and precision (±0.1 μg/m 3 ).
The above monitoring devices were installed in the cabin of a KTX-Sancheon train bound for Gwangmyeong Sta., departing from Iksan at 12:15. The KTX-Sancheon train has a total of 10 cars (2 motive power units and 8 passenger cars). Its total length and weight are 201 m and 403 t (before passenger boarding), respectively. The total number of seats in a KTX-Sancheon train set is 375. The monitoring devices were installed in the aisle at the 7th seat row of car No. 12, which has 52 seats, and the air inlet of the devices was located at the height of a seated passenger's nose. Fig. 4 shows the schematic diagram of car No. 12 (standard class) of the KTX (top), a view of the inside of the passenger cabin where measurements were made (bottom left), and the BC/PM monitors at the 7th aisle position (bottom right).
The car was full (52 passengers) at Iksan Sta.; passengers got on and off at each station, and the average number of passengers up to Gwangmyeong was 48.
During the KTX service, the temperature and relative humidity in the passenger cabin were maintained at around 21°C and 50%, respectively.
For comparison with PM concentrations in other places, additional on-site measurements were made inside the cabins of the Seoul Metro subway and outside KTX Iksan Sta. on the same day.
3 Calculation of PM Exposure
It is well known that the deposition of PM 2.5 in the body causes many respiratory diseases, including lung cancer (Wagner et al., 2012). Subway PM and its health hazards have been studied extensively (Ripanucci et al., 2006; Seaton et al., 2005; Chillrud et al., 2004). However, so far, little has been reported on the PM of high-speed trains, including the KTX.
In this study, the amount of PM 2.5 exposure (exposure PM2.5 (μg)), which is basic data for evaluating health hazards, was calculated. The target passengers were adult men and women who used the KTX to go to work on weekdays.
In this calculation, C PM2.5-cabin is the actual measured PM 2.5 concentration in the KTX passenger cabin in each service section, F Dep. is the deposition fraction in the AI region, T Exp. is the exposure time (h), and R Bre. is the breathing rate (m 3 /h). F Dep. is the maximum deposition efficiency (%) in the AI region for the given activity pattern (Yamada et al., 2007). In this study, the passenger's activity pattern was assumed to be sitting/rest. Assuming that the target passengers used the KTX to commute only on weekdays, the numbers of days of use per month and per year were set to 20 and 240, respectively.
It is meaningful to compare the KTX-exposure PM2.5 (μg) with the exposure PM2.5 (μg) of a car driver (Car-exposure PM2.5 (μg)) driving a personal car over the same section served by the KTX.
The Car-exposure PM2.5 (μg) was calculated in the same way, under the same conditions as for the KTX, that is, with the windows closed and the in-vehicle ventilation system operating.
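The exposure equations themselves did not survive extraction here. As a rough illustration only, the short Python sketch below computes a deposited PM 2.5 dose under the commonly used multiplicative assumption exposure = C × F Dep. × R Bre. × T Exp. summed over service sections; the section concentrations, deposition fraction, breathing rate, and the function name are placeholders, not the values or exact formula used in this study.

# Hypothetical sketch: deposited PM2.5 dose (ug) accumulated over trip sections,
# assuming the multiplicative form dose = C * F_dep * R_bre * T_exp per section.
# All numbers below are illustrative placeholders, not the study's measurements.

def deposited_dose_ug(sections, f_dep, r_bre_m3_per_h):
    """Sum C (ug/m3) * F_dep * R_bre (m3/h) * T (h) over the trip sections."""
    return sum(c_ug_m3 * f_dep * r_bre_m3_per_h * t_h for c_ug_m3, t_h in sections)

# (PM2.5 concentration in ug/m3, time spent in hours) for each service section
ktx_sections = [(25.0, 0.4), (30.0, 0.5), (20.0, 0.3)]
car_sections = [(60.0, 1.5), (55.0, 1.0)]

# Placeholder deposition fraction (AI region, sitting/rest) and breathing rate
ktx_dose = deposited_dose_ug(ktx_sections, f_dep=0.12, r_bre_m3_per_h=0.54)
car_dose = deposited_dose_ug(car_sections, f_dep=0.12, r_bre_m3_per_h=0.54)

print(f"KTX dose: {ktx_dose:.2f} ug, car dose: {car_dose:.2f} ug, "
      f"ratio car/KTX: {car_dose / ktx_dose:.1f}")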
1 Variation of PM Concentrations in the KTX Passenger Cabin
Fig. 5 shows the mapping results of the BC (left) and PM 2.5 (right) concentrations drawn on the KTX line from Iksan to Gwangmyeong. According to these visualized concentrations, both BC and PM 2.5 fluctuated greatly for each service section and stop station.
In general, BC is formed through incomplete combustion of fossil fuels in automotive internal combustion engines and other combustion facilities. Therefore, one reason why the BC concentration fluctuated so much, even though it was measured in a passenger cabin during service, is thought to be the inflow of road-traffic BC in each service section and at each stop station via the ventilation system and/or the opening of the train doors.
Fig. 6 shows the variation of the size-resolved PM concentrations over the KTX service section between Iksan and Gwangmyeong Sta. For all PM size fractions, the mass concentration fluctuated significantly and showed high values near each stop station. Over the entire service section, the concentrations of PM 1 , PM 2.5-1 , and PM 10-2.5 in the passenger cabin ranged from 17.4-30.3 μg/m 3 with an average of 23.2 μg/m 3 , 3.1-7.0 μg/m 3 with an average of 5.7 μg/m 3 , and 10.6-32.7 μg/m 3 with an average of 21.9 μg/m 3 , respectively. It is worth noting that the average PM 1 concentration over the whole service section was the highest compared to PM 2.5-1 and PM 10-2.5 . This is of concern because smaller particles can penetrate deeper into the respiratory system more easily; moreover, compared to PM 2.5 , they can stay in the lungs longer and cause more inflammation (Schraufnagel, 2020).
The reason why PM concentrations of all sizes were high at all service stops is that external PM flowed into the passenger cabin while the doors were opened and closed at each stop station. Meanwhile, the lower PM levels during service were probably due to gravitational deposition and the ventilation facilities equipped with filters.
Also, according to Fig. 6, since during 54% (39 minutes) of the total KTX operating time (71 minutes) the recommended standard for PM 2.5 (50 μg/m 3 ) was exceeded, efforts to reduce the PM 2.5 concentration in the KTX cabin are urgently needed. Fig. 7 shows the correlations among PM 1 , PM 2.5 , and PM 10 during KTX service and stops from Iksan to Gwangmyeong. Although it may be considered natural, the correlation with PM 1 was slightly higher for PM 2.5 than for PM 10 , both during service and while stopping. More meaningful is the fact that the correlation was higher during train stops than during service. Unlike transportation with an internal combustion engine, there is no possibility of PM 1 generation by fuel combustion in the KTX; therefore, the inflow of external air might have affected the ultrafine PM in the passenger cabin. Here, the external inflow may refer to an inflow of the surrounding ambient atmosphere, but the influence of ultrafine PM generated by the train itself near the stop station may also be large. It is well known that the frictional heat of the train wheels and rails generates a great number of ultrafine particles, mostly iron vapor (Kim and Ro, 2010; Lorenzo et al., 2006). The generation of new fine particles from the friction between train wheel and rail, as well as between brake pad and train wheel, was well explained with illustrations by Ma et al. (2015).
In order to evaluate the inflow of PM into the passenger cabin from the outside, the correlation between the PM (PM 2.5 and PM 10-2.5 ) concentrations in the passenger cabin and those measured at the AQMSs near the four KTX stations was estimated (Fig. 8). For PM 10-2.5 , no correlation between the internal and external concentrations was recognized, but for PM 2.5 , a fairly high correlation (R = 0.79) was found. According to these results, it can be said that the concentration of fine PM in the passenger cabin was much more affected by external inflow than that of coarse PM. In addition to the coarse PM of the outside air, there might be an influence of coarse PM introduced on the clothing or shoes of passengers. It might also be that the air purification system of the KTX removes coarse PM more efficiently than fine PM.
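As a small illustration of how such an in-cabin versus outdoor comparison can be quantified, the hedged Python sketch below computes a Pearson correlation coefficient for paired PM 2.5 readings; the arrays are invented placeholder values, not the data behind the R = 0.79 reported above.

# Hypothetical sketch: Pearson correlation between cabin and outdoor PM2.5
# at the stop stations. The values below are placeholders, not the study data.
import numpy as np
from scipy.stats import pearsonr

cabin_pm25 = np.array([22.0, 28.0, 31.0, 26.0, 35.0])    # ug/m3, in-cabin at stops
outdoor_pm25 = np.array([55.0, 62.0, 78.0, 60.0, 84.0])  # ug/m3, nearby AQMS

r, p_value = pearsonr(cabin_pm25, outdoor_pm25)
print(f"Pearson R = {r:.2f} (p = {p_value:.3f})")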
Table 1 summarizes the size-resolved PM concentrations measured in the passenger cabins of the Seoul Metro subway and the KTX during train operation on the same day. In the case of the subway, slightly higher PM concentrations were observed in the underground sections than in the above-ground sections. Similar cases have already been reported for the subways in Taiwan and Japan (Cheng et al., 2019; Ma et al., 2012). The PM contents in Taiwan Metro trains were approximately 20-50% higher while running underground than while running above ground. Meanwhile, in this study, there was no significant difference in PM concentration between the above-ground and underground sections, probably because the outdoor PM concentration on the day of measurement was abnormally high.
The concentrations of PM 1 , PM 2.5 , and PM 10 in the KTX passenger cabin were 74.9%, 73.3%, and 62.7% of those in the cabin of the Seoul Metro subway, respectively. One important fact, however, is that the fraction of PM 1 in PM 10 was higher in the KTX than in the subway. The fractions of PM 1 in PM 10 were 39.9% and 47.5% in the subway cabin (average of the above-ground and underground sections) and the KTX cabin, respectively. Moreover, the PM 1 /PM 10 ratio in the KTX cabin (0.48) was slightly higher than that measured in the outside atmosphere at KTX Iksan Sta. (0.47).
Table 2 shows the levels of PM 10 and PM 2.5 in the passenger cabins of electrically powered trains in several cities around the world. The PM 10 concentration in the KTX passenger cabin was 2.6 times higher than that of Los Angeles and 2.5 times lower than that of Beijing. In the case of PM 2.5 , the range for the other four cities was from 14 to 46 μg/m 3 , and the PM 2.5 in the KTX passenger cabin measured in this study was 25 μg/m 3 . The fraction of PM 2.5 in PM 10 in the KTX cabin (59.5%) was also not extremely high or low compared to the other cities.
2 PM Exposure for the Regular Users of the KTX
The calculated KTX-exposure PM2.5 (μg) for female/male passengers who used the KTX regularly is summarized in Table 3. In the table, the non-episode PM 2.5 is the PM 2.5 concentration measured in the same service section on a non-episode day. Although it is a natural result, the KTX-exposure PM2.5 (μg) increased in the sections with high PM 2.5 concentrations in the passenger cabin. Due to the differences in F Dep. and R Bre. between males and females, the KTX-exposure PM2.5 (μg) in the KTX service section was calculated to be much larger for men than for women.
Table 4 shows the KTX-exposure PM2.5 (μg) and the Car-exposure PM2.5 (μg) in the AI region for female/male users during a round trip over the same section. The KTX-exposure PM2.5 (μg) and the Car-exposure PM2.5 (μg) for females in each service section ranged from 0.9-2.2 μg with an average of 1.58 μg and from 4.4-14.0 μg with an average of 8.98 μg, respectively. The Car-exposure PM2.5 (μg) for the male driver increased compared to the female driver in proportion to the increases in F Dep. and R Bre. . In terms of the whole service section (from Iksan Sta. to Gwangmyeong Sta.), the exposure PM2.5 (μg) for both males and females was 5.7 times higher in the car than in the KTX. One reason might be that the driving time (i.e., T Exp. ) was much longer than that of the KTX, and it is also thought that the in-cabin air quality of the car was more affected by the outside air quality than that of the KTX.
CONCLUSIONS
We assessed the PM concentrations in the cabin of the KTX and the personal PM 2.5 exposure of female/male passengers. For all PM size fractions, the concentration in the passenger cabin of the KTX was relatively low compared to that of the Seoul Metro subway. However, the fraction of PM 1 in PM 10 was higher in the KTX than in the subway. The PM 1 /PM 10 ratio in the KTX cabin was also higher than that measured in the outside atmosphere at the KTX station. Although the mechanisms of PM generation on railroads are the same, the reason the KTX cabin has a higher PM 1 /PM 10 ratio than the subway cabin may be that the KTX's ventilation system is located close to the rails and wheels. As mentioned earlier, a lot of submicron PM can easily be generated while trains are running and stopping. Therefore, it will be necessary to improve the ventilation system of the KTX and minimize the inflow of external PM, especially ultrafine PM. Above all, improvements should be made to reduce PM inflow from the outside while the KTX is stopped. Finally, in this study, the results of one intensive measurement were discussed for a limited set of KTX operation sections, but more specific data will be provided through repeated measurements of other sections in the future.
Fig. 1. Ventilation and air purification system in the KTX train cabin.
Fig. 2. The service section of the KTX train cabin measurement covered in this study.
Fig. 3. The outdoor PM 10 and PM 2.5 measured at air quality monitoring stations near four KTX stations during February 2020.
Fig. 4. The schematic diagram of car No. 12 (standard class) of the KTX (top), the view of the inside of the train cabin (bottom left) where measurements were made, and BC/PM monitors at the 7th aisle (bottom right).
Table 1. The size-resolved PM concentration (average ± standard deviation) measured in the passenger cabin of the Seoul subway and the KTX during train operation on the same day (unit: μg/m 3 ). *Seoul Metro Subway
Table 2. The levels of PM 10 and PM 2.5 in the passenger cabins of electrically powered trains in several cities around the world (unit: μg/m 3 ). *The average value of the whole KTX service section from Iksan Sta. to Gwangmyeong Sta. | 2022-08-27T15:23:27.336Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "04aed460997595158e03d3ed07027fdfe1176f25",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5572/ajae.2022.041",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "16ba7bbe5906436ba07f828a13e735509d3454fb",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
119644004 | pes2o/s2orc | v3-fos-license | Transcendence degree of zero-cycles and the structure of Chow motives
We show how the notion of the transcendence degree of a zero-cycle on a smooth projective variety X is related to the structure of the motive M(X). This can be of particular interest in the context of Bloch's conjecture, especially for Godeaux surfaces, when the surface is given as a finite quotient of a suitable quintic in P^3.
Introduction
Since [3] we know that the generic point, considered as a zero-cycle, plays an important role in the study of algebraic cycles on a smooth projective variety X over a field k, because it can be considered as a specialization of the diagonal carrying the motivic information at large. More precisely, let k be an algebraically closed field, let d be the dimension of X, and let K = k(X) be the function field of X. Consider a pull-back homomorphism Φ : CH d (X × X) → CH d (X K ) induced by the embedding of the generic point η = Spec(K) into X. The kernel of Φ is generated by correspondences supported on Z × X, where Z runs over Zariski closed subschemes in X different from X itself, see [10]. Hence, various motivic effects, given originally in terms of correspondences, i.e. cycle classes in CH d (X × X), can be expressed in terms of zero-cycles on X K , modulo motives of varieties of dimension < d.
Assume, for example, that X is a surface of general type over an algebraically closed field k, and the second Weil cohomology group H 2 (X) is algebraic. Let ∆ X be the diagonal on X × X. Its specialization is the generic point η viewed as a zero-cycle on X K . Fix now a k-rational point P 0 on X. Let Ω be a universal domain containing k and embed K into Ω over k. In the paper we will show, see Corollary 8, that if P η is rationally equivalent to P 0 on X Ω then any point P is rationally equivalent to any other point Q on X Ω , i.e. Bloch's conjecture holds for X Ω . As Bloch's conjecture is equivalent to finite-dimensionality of the motive M(X Ω ), we see that the above specialization map Φ allows one to reformulate motivic effects at large in terms of rational equivalence between two concrete points on X Ω .
Certainly, it is still not easy to prove (or disprove) rational equivalence between the above points P η and P 0 . One of the problems here consists in the lack of rational curves on surfaces of general type with algebraic H 2 (X). However, we believe that any further progress towards Bloch's conjecture must involve analysis of the possibility of an explicit rational deformation of P η into P 0 on the surface X Ω .
The above picture can now be generalized as follows. Let X be a smooth projective variety of dimension d over an algebraically closed field k. For any zero-cycle Z = ∑ i n i P i on X one can define its transcendence degree as the maximum of the transcendence degrees of the residue fields k(P i ). The transcendence degree of a zero-cycle class α ∈ CH d (X) is the exact lower bound of the transcendence degrees of representatives of α. Then the motive M(X) is a direct summand of a sum of motives of varieties of dimensions < d, twisted by Lefschetz motives, if and only if the transcendence degree of any zero-cycle class α ∈ CH d (X) is strictly smaller than d.
A nice thing is that the last assertion is also equivalent to the fact that there exists a point P of transcendence degree d on X Ω , rationally equivalent to a zero-cycle on X whose transcendence degree is strictly smaller than d. More precisely, we prove the following theorem (see Theorem 7 in the text below): For any smooth projective variety X of dimension d over k the following conditions are equivalent: (i) the class of the diagonal ∆ X is balanced; (ii) the Chow motive of X is a direct summand of a sum of motives of varieties of dimension strictly smaller than d; (iii) the transcendence degree of any zero-cycle class on X Ω is strictly less than d; (iv) there exists a closed point on X Ω whose transcendence degree is d but the transcendence degree of its class modulo rational equivalence is strictly less than d.
Some motivic lemma
Below we will use the notation from [7]. In particular, all Chow groups will be with coefficients in Q, except in the cases when integral coefficients are indicated by a subscript Z. The category of Chow motives CM over a field k will be contravariant. That is, if X and Y are two smooth projective varieties over k, and X is decomposed into its connected components X j , then the group of correspondences CH m (X, Y ) of degree m from X to Y is the direct sum of the Chow groups CH e j +m (X j × Y ), where e j is the dimension of X j . The composition of two correspondences f ∈ CH n (X, Y ) and g ∈ CH m (Y, Z) is the standard one, given by the push-forward to X × Z of the intersection of the pull-backs of f and g on X × Y × Z, where the intersection of cycle classes is taken in the sense of [6]. We also have a contravariant functor M from the category of smooth projective varieties over k to the category CM, sending any variety X to its motive, cut out by the diagonal class ∆ X of X, and any morphism f : X → Y to the class of the transpose Γ t f of its graph Γ f . Further details on Chow motives can be found, for example, in [7].
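The displayed formulas for these conventions did not survive extraction. For concreteness, the LaTeX display below records the standard definitions they refer to (correspondence groups, composition via push-forward of intersected pull-backs, and the graph construction), as used for instance in [7]; it is a restatement of standard conventions, not a reconstruction of the author's exact wording.

\[
\mathrm{CH}_m(X,Y) \;=\; \bigoplus_j \mathrm{CH}^{\,e_j+m}(X_j \times Y), \qquad e_j = \dim X_j ,
\]
\[
g \circ f \;=\; {p_{XZ}}_{*}\bigl(p_{XY}^{*}(f) \cdot p_{YZ}^{*}(g)\bigr) \in \mathrm{CH}_{n+m}(X,Z),
\]
\[
M(f) \;=\; [\Gamma_f^{\,t}] : M(Y) \longrightarrow M(X) \quad \text{for a morphism } f : X \to Y .
\]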
The next notion we need is the notion of balancing. Let X and Y be two equi-dimensional varieties over k. Similarly to [1], we say that a correspondence α ∈ CH m (X, Y ) is balanced on the left (respectively, on the right) if there exists an equi-dimensional Zariski closed subscheme Z ⊂ X with dim(Z) < dim(X) (respectively, Z ⊂ Y with dim(Z) < dim(Y )), and an algebraic cycle Γ on X × Y , such that [Γ] = α in CH m (X, Y ) and the support of Γ is contained in Z × Y (respectively, in X × Z). The subscheme Z will be called a pan of balancing. We say that α ∈ CH m (X, Y ) is balanced if α = α 1 + α 2 , where α 1 is balanced on the left and α 2 is balanced on the right.
Balancing was discovered in [3] and [5]. It is a motivic notion and can be restated in purely motivic terms: Lemma 1. Let X and Y be equidimensional smooth projective varieties over k, and let α ∈ CH m (X, Y ). Then α is balanced on the left if and only if there exists an equidimensional smooth projective variety Z over k, with dim(Z) < dim(X), such that α factors through the motive of Z. Symmetrically, the correspondence α is balanced on the right if and only if there exists an equidimensional smooth projective variety Z over k, with dim(Z) < dim(Y ), such that α factors through the motive of Z twisted by an appropriate power of the Lefschetz motive. Proof. If m = 0 and the closed subscheme Z is smooth, then the lemma is just obvious. Indeed, let i : Z ↪ X be the closed embedding, and let Γ t i be the transpose of the graph of the embedding i. If α is balanced on the left then it can be considered as a correspondence of degree zero from Z to Y . Therefore, the correspondence α from X to Y is a composition of the correspondence Γ t i with α as a correspondence from Z to Y .
The detailed proof of the lemma when Z is not necessarily smooth and m = 0 is given in [7].
In the next section we will introduce the transcendence degree of a zero cycle on a smooth projective variety and we will show how it is related to balancing of the diagonal, and so the above motivic factorizations from Lemma 1.
Transcendence degree of zero-cycles
First we need to recall some well-known things from the theory of schemes.
Let k be a field, and let X be an algebraic scheme over k. Let k ⊂ K be a field extension. Recall that a K-point on X is a morphism of schemes P : Spec(K) → X over Spec(k). A subextension k ⊂ L ⊂ K is a field of definition of the point P if there exists a morphism P L : Spec(L) → X through which P factors. Let ξ P be the image of the unique point of Spec(K) under the morphism P , and let k(ξ P ) be the residue field of the point ξ P on the scheme X. Then k(ξ P ) is the minimal field of definition of the point P , i.e. the initial object in the category of fields of definition of the point P , because k(ξ P ) embeds into L and the above morphism P L factors through the natural morphism Spec(k(ξ P )) → X.
By definition, the transcendence degree of the point P over the ground field k is the transcendence degree of the field k(ξ P ) over k: tr.deg(P/k) = tr.deg(k(ξ P )/k). Thus, the transcendence degree tr.deg(P/k) is the transcendence degree of the minimal field of definition of the point P over the ground field k.
Notice that if k ⊂ L ⊂ K is a field subextension, then the point P can also be viewed as a point Q L over the intermediate field L (one has the obvious commutative diagram). Notice that the transcendence degree tr.deg(Q L /L) can be different from the transcendence degree tr.deg(P/k). For example, tr.deg(P k(ξ P ) /k(ξ P )) = 0.
Let Y be the Zariski closure of the schematic point ξ P in X. Then Y is a closed irreducible subscheme in X and dim(Y ) = tr.deg(k(ξ P )/k) = tr.deg(P/k). It follows, in particular, that tr.deg(P/k) ≤ dim(X). Now we are going to introduce the notion of a transcendence degree of a zero-cycle on a variety. Let Ω be a universal domain containing k. Suppose X is an equidimensional variety, and let d be the dimension of X. As in the introduction, the transcendence degree of a zero-cycle on X Ω is the maximum of the transcendence degrees of its points, and the transcendence degree of a class in CH d (X Ω ) is the exact lower bound of the transcendence degrees of its representatives. The following properties of the transcendence degree for zero-cycles follow directly from the above definition.
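For the reader's convenience, the definition recalled above (and stated in the introduction) can be written in displayed LaTeX form as follows; this is only a restatement, with notation matching the rest of the section.

\[
\operatorname{tr.deg}\Bigl(\textstyle\sum_i n_i P_i \,/\, k\Bigr) \;=\; \max_i \ \operatorname{tr.deg}\bigl(k(P_i)/k\bigr),
\qquad
\operatorname{tr.deg}(\alpha/k) \;=\; \min_{[Z]=\alpha} \ \operatorname{tr.deg}(Z/k),
\]
where the minimum is taken over all zero-cycles Z on X_Ω representing the class α.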
Lemma 3. Let X be an equidimensional variety over k of dimension d. Then the following is true: (i) for any element α ∈ CH d (X Ω ) one has tr.deg(α/k) ≤ d; (iii) given a field subextension k ⊂ K ⊂ Ω and an element β ∈ CH d (X K ), we have an inequality tr.deg(β Ω /k) ≤ tr.deg(K/k).
Remark 4.
Not every cycle class α ∈ CH d (X Ω ) is equal to β Ω for some β ∈ CH d (X K ) and K with tr.deg(K/k) = tr.deg(α/k). Let, for example, X be a smooth projective curve of genus at least two. Then there exists a point P of transcendence degree at least two on the Jacobian variety Jac(X) of X over k. Let α be the cycle class in the Chow group CH 1 (X Ω ) 0 of degree zero 0-cycles on the curve X corresponding to the point P under the isomorphism CH 1 (X Ω ) 0 ≅ Jac(X)(Ω). Then tr.deg(α) ≤ 1 because dim(X) = 1. Suppose now that α comes from an element β ∈ CH 1 (X K ) 0 by means of the scalar extension from K to Ω, where tr.deg(K/k) = 1. Since the isomorphism between the Chow group of degree zero 0-cycles and the Jacobian commutes with scalar extensions of the ground field, the point P must be defined over K, which is impossible as tr.deg(P/k) = 2.
We will also use the following fact.
Lemma 5. Let X and Y be two smooth projective equidimensional varieties over k, let d = dim(X), e = dim(Y ), and assume e < d. Let ϕ be a correspondence of degree d − e from Y to X, that is, a morphism of Chow motives. Then for any element α ∈ CH e (Y Ω ) one has tr.deg((ϕ Ω ) * (α)/k) ≤ tr.deg(α/k).
Proof. Let ∑ i n i P i be a zero-cycle on Y Ω representing the class α.
Remark 6. Certainly, one can also define the notion of a transcendence degree for all closed irreducible subschemes in X Ω and, respectively, for elements in Chow groups CH p (X C ) of arbitrary codimension p. Moreover, analogs of Lemma 3 (ii), (iii) and Lemma 5 imply that a transcendence degree is also well-defined for elements in Chow groups of Chow motives over k, and that this transcendence degree does not increase under taking push-forwards with respect to morphisms between Chow motives over k. Now we are ready to prove our main statement.
where Y 1 and Y 2 are equidimensional smooth projective varieties over k whose dimensions are strictly less than d, and e is the dimension of the variety Y 2 ; (iii) any element α ∈ CH d (X Ω ) satisfies tr.deg(α/k) < d; (iv) there exists a closed point P ∈ X Ω such that tr.deg(P/k) = d and tr.deg([P ]/k) < d, where [P ] is the class of the point P in CH d (X Ω ).
Proof.
where α 1 is balanced on the left and α 2 is balanced on the right. By Lemma 1, there exist two equidimensional varieties Y 1 and Y 2 as in (2), and factorizations of α 1 and α 2 , so that α factorizes accordingly. We see that all elements in CH d (X Ω ) are push-forwards with respect to the corresponding morphism of motives. Then (iii) follows from Lemma 5 and Lemma 3 (i).
(iv) ⇒ (i) Let ∑ i n i P i be a zero-cycle on X Ω representing the class [P ] and such that tr.deg(P i /k) < d for each i. By definition of a transcendence degree, there are field extensions K ⊂ Ω and K i ⊂ Ω over k, and points W : Spec(K) → X and W i : Spec(K i ) → X, such that W Ω = P , (W i ) Ω = P i , the fields K and K i are finitely generated over k with tr.deg(K/k) = d and tr.deg(K i /k) < d.
Let L be the composite of the fields K and K i in Ω. As all involved Chow groups are with coefficients in Q, the rational equivalence between P and ∑ i n i P i holds already over L; see [3], page 1.21. Let now V be a smooth irreducible quasi-projective variety over k, such that k(V ) = K. Then we also have a dominant rational morphism f : V → X which coincides at the generic point with the morphism W : Spec(K) → X.
Similarly, for each i, we have a smooth irreducible quasi-projective variety V i with k(V i ) = K i , and a dominant rational morphism f i : V i → X inducing the morphism W i : Spec(K i ) → X at the generic point. Shrinking the varieties V and V i to Zariski open subsets, one can assume that the above morphisms f and f i are all regular.
We also need a smooth irreducible quasi-projective variety Z over k with dominant regular morphisms g : Z → V and g i : Z → V i , such that the function field k(Z) coincides with L.
For any regular morphism h let Γ h be the graph of h. Shrinking Z to a non-empty Zariski open subset if necessary, we have that [Γ f g ] = ∑ i n i [Γ f i g i ] in the group CH d (Z × X), because the analogous rational equivalence holds over the generic point of Z, which is Spec(L), see above.
Let T ⊂ Z be a generic d-dimensional multiple hyperplane section of Z. The scheme T is irreducible by Bertini's theorem, and the restrictions h := g| T : T → V , h i := (g i )| T : T → V i are still dominant. By taking pull-backs in Chow groups with respect to the embedding T × X → Z × X, we obtain the analogous equality in CH d (T × X). Since dim(T ) = d and the composition f h : T → X is dominant, we see that f h is generically finite. Thus, shrinking T to a non-empty open subset, we may assume that the morphism f h is a finite surjective morphism from T onto a non-empty open subset U in X.
Now we use push-forwards in Chow groups with respect to the finite morphism f h × id X : T × X → U × X. From the above equality we obtain a corresponding equality on U × X, in which the cycles coming from the right-hand side are supported, set-theoretically, over the closures of the images of the morphisms f i . Since dim(V i ) = tr.deg(K i /k) < d and all the Chow groups are with rational coefficients, we see that ∆ X is balanced.
Remark. The equivalence (i) ⇔ (ii) was actually proved in [8] but we included it in the theorem for the convenience of the reader.
An example
An important thing in Theorem 7 is that (iv) implies (i). Let us illustrate this by an example.
Let X be a smooth projective surface over C, of general type and with p g = 0. Recall that Bloch's conjecture predicts that for any two closed points P and Q on X C the point P is rationally equivalent to Q. This conjecture is a codimension 2 case of the Bloch-Beilinson paradigm for algebraic cycles, and it is highly inaccessible. It is known for surfaces with Kodaira dimension < 2, [4], for finite quotients of products of curves, [11], and for surfaces of general type (which are not finite quotients of products of curves) in [9], [2] and [17].
Let now k be the algebraic closure in C of the minimal field of definition of the surface X, and let K = k(X) be the function field of X over k. Let η = Spec(K) be the generic point of X, and let P η be the corresponding K-rational closed point on X K . Theorem 7 implies the following corollary: Corollary 8. Bloch's conjecture holds for X if and only if there exist an embedding of K into C over k, and a k-rational point P on X, such that the above closed K-rational point P η is rationally equivalent to P on X as a variety over C.
This can be made absolutely explicit in the case of Godeaux surfaces, for which Bloch's conjecture was proved by C. Voisin in [17]. Namely, let µ 5 be the group of 5th roots of unity in C, and let ǫ be a primitive root in it. The group µ 5 acts on P 3 coordinate-wise, multiplying the coordinates by powers of ǫ. Let f = f (x) be a µ 5 -invariant smooth quintic form in P 3 , and let Y = Z(f ) be the set of zeros of f in P 3 . Since f is µ 5 -invariant, the group µ 5 acts on Y . Assume, in addition, that Y does not contain the four fixed points of the action of µ 5 on P 3 . Then the quotient surface X = Y /µ 5 is non-singular, and it is called a Godeaux surface. It is well known that p g = q = 0 for such X, see [16].
Take now two transcendental complex numbers which are algebraically independent over Q, say e and e π , see [14]. Let α be one of the zeros of the polynomial obtained by substituting the coordinates e and e π into the affinized form f . Then P η can be represented as the class of the point (e, e π , α) ∈ C 3 under the quotient map Y → X.
Then Voisin's result says that the point P η is rationally equivalent to a point in X(Q). The specificity of Corollary 8 is that it says that the above rational equivalence between two single points on X(C) is the only reason for vanishing of the whole Albanese kernel in this situation.
We believe that this observation can be useful in approaching Bloch's conjecture in some concrete contexts, such as Mumford's fake surface, see [13]. Recall that such surfaces were recently classified in [15].
WARNING. It would be tempting to find a rational curve through the points P η and P 0 on the Godeaux surface X over C. The first problem is that X is a surface of general type whose discrete invariants vanish, so that one can expect only a few rational curves on X C . But this is not yet the main trouble. The main difficulty is that no rational curves can pass through P η at all.
Indeed, let X be a smooth projective surface over the ground subfield k in Ω. Let P η be a closed point of transcendence degree 2 on X Ω . Suppose there exists a field subextension k ⊂ K ⊂ Ω, a point P : Spec(K) → X with tr.deg(P/k) = 2, and a rational curve C on X K passing through the point P . Let us show that X is then uniruled. Without loss of generality one can assume that K is finitely generated over k. Let Y be an irreducible variety over k, such that K is the function field of Y over k. The rational curve C ⊂ X K induces a morphism φ : P 1 K → X K , which in turn induces a rational morphism from P 1 × k Y to X. The point P gives a morphism Spec(K) → P 1 × k K over K. This corresponds to some rational section of the projection P 1 × k Y → Y . The morphism Spec(K) → P 1 K → X K → X sends the unique point of Spec(K) to the generic point of X, because tr.deg(P/k) = 2. Therefore, the composition Spec(K) → P 1 K → X K → X is dominant, where the last arrow p X is the projection onto X. It follows that the rational morphism from P 1 × k Y to X is dominant as well. Moreover, the induced map P 1 K → X K gives a birational isomorphism with its image. It follows that this image is a curve in X K . Hence, the rational morphism from P 1 × k Y to X does not factor through the projection P 1 × k Y → Y . Hence, for at least one point y ∈ Y the induced map P 1 y → X is not constant. Hence, X is uniruled by [12, 1.3.4]
This shows that in order to find a precise rational equivalence between P η and P 0 we need to find more than one curve of genus > 0 on the Godeaux surface X, and rational functions on them, which will provide a suitable zero-pole cancellation for their principal divisors. | 2010-11-28T20:27:23.000Z | 2010-09-08T00:00:00.000 | {
"year": 2010,
"sha1": "42ae10ba0672c8f2949d8246770cb94325882cdb",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1009.1434",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "23a26e9c31bd4609ba2ccf07ee4cee79b28c3238",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
218950990 | pes2o/s2orc | v3-fos-license | BORDERLINE OVARIAN TUMORS IN PREGNANCY
Borderline ovarian tumors are characteristic of women of reproductive age; in more than a third of patients the tumors are detected at the age of 15-29 years, and the mean age at primary diagnosis is 40 years. The aim of the study was to improve methods for the diagnosis of borderline ovarian tumors during pregnancy and to determine the possibilities of organ-preserving treatment. A total of 300 pregnant women with various tumor-like formations and ovarian tumors were examined, of whom 25 had borderline epithelial tumors: 22 serous and three mucinous. Before surgery, ultrasound examination was performed and serum concentrations of CA-125, sFas, VEGF and IL6 were determined. The results obtained were compared with morphological studies. Organ-preserving and radical surgical treatment was performed and, when necessary, chemotherapy. Perinatal outcomes were studied by cross-comparison. It was found that benign ovarian tumors can be distinguished from borderline (BOTs) and malignant ovarian tumors using ultrasound and logistic regression models. VEGF levels above 500 pg/ml, IL6 above 8.1 pg/ml and CA-125 above 300 U/ml indicate a high probability of a malignant ovarian tumor in pregnant women. Only morphological examination of ovarian tissue, obtained regardless of the surgical approach, gave a true picture of the nature of the ovarian tumor in pregnant women. At the same time, in three pregnant women with ovarian tumors, morphological examination revealed tissue areas characteristic of both borderline and malignant ovarian tumors. Thus, the predominance of early-stage disease and the relatively favorable course and prognosis of BOTs allow fairly wide use of sparing surgical treatment with preservation of menstrual function and fertility.
Ovarian tumors/tumor-like formations are distributed in accordance with the histological structure, stage (BOTs/malignant ovarian tumors) and degree of abnormality (ovarian cancer) [9, 10]. In more than 70% of pregnant women, the tumors are detected during an ultrasound scan in the early stages of gestation (early tumor stages according to the FIGO system). Surgical treatment of malignant ovarian tumors and BOTs in pregnant women is normally performed in the first and second trimesters of pregnancy [5, 11, 12], which leads to increased perinatal morbidity and early infant mortality.
The study was aimed at improving methods for BOT diagnosis in pregnancy and determining the possibilities of organ-preserving treatment.
METHODS
In 2000-2017 a group of 300 pregnant women with various tumor-like formations and ovarian tumors was prospectively examined. Inclusion criteria: pregnant women with tumor-like formations/ovarian tumors diagnosed during I-III trimesters. Exclusion criteria: the woman's refuse to participate in the study; pregnant women with cancer diagnosed before the study; patients with threatened abortion, intrauterine infection, impairments in a fetus diagnosed before the study. The results of the study were evaluated by cross-analysis. The results' distribution in accordance with the morphological structure, tumor stage and the abnormality degree is presented in Fig. 1.
In 76 of the 300 pregnant women with ovarian neoplasms, BOTs or malignant ovarian tumors were detected. Of the 25 patients with BOTs, 22 had serous and 3 had mucinous forms. It should be noted that the study was carried out over a long period and patient recruitment was random, not population-based.
Ultrasonographic examination was performed with the Voluson 530 MT (Kretztechnik; Austria) and Voluson E8 (General Electric; USA) systems and the RIC5-9-D (4-9 MHz), C1-5-D (2-5 MHz) and RAB4-8-D (2-8 MHz) probes. The ultrasound scan was carried out in 2D and 3D mode, combined with color and power Doppler mapping as well as three-dimensional angiography. Color Doppler mapping was used to assess the following features: the vascularization pattern (tumor periphery, central parts of the tumor, septa, papillary features) and the blood flow velocity curve, with determination of the resistance index (RI) and peak systolic blood flow velocity (cm/s). Of 30 ultrasound signs of tumor-like formations, benign ovarian tumors, BOTs and malignant ovarian tumors, 17 signs appeared to be informative. For ultrasound diagnostics the proposed model was used, allowing one to distinguish between benign ovarian tumors, BOTs and malignant ovarian tumors [13]. Our previous studies [14] demonstrated that ovarian tumors in pregnant women had ultrasound signs allowing one to differentiate between benign and malignant ovarian tumors with high accuracy. During the study it was noted that the differences in the ultrasound features of various ovarian neoplasms were significant. When studying the ultrasound signs of malignant epithelial tumors of the ovaries (ovarian cancer), four types of structure and, most importantly, unique hemodynamic parameters were identified. At the same time, an assessment scale based on the analysis of the ultrasound signs was created. To evaluate the accuracy of the model, in addition to the actual percentage of correct assignments, the sensitivity (Se) and specificity (Sp) parameters were taken into account.
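As a purely illustrative sketch (not the authors' model, whose coefficients and 17 ultrasound signs are not reproduced here), the following Python fragment shows how a logistic-regression classifier built on binary ultrasound signs can be scored by sensitivity, specificity and overall accuracy; all arrays are invented placeholder data.

# Hypothetical sketch: scoring a logistic-regression diagnostic model by
# sensitivity, specificity and overall accuracy. Data below are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_signs = rng.integers(0, 2, size=(60, 17))   # 17 binary ultrasound signs
y_true = rng.integers(0, 2, size=60)          # 0 = benign, 1 = BOT/malignant

model = LogisticRegression(max_iter=1000).fit(X_signs, y_true)
y_pred = model.predict(X_signs)

tp = int(np.sum((y_pred == 1) & (y_true == 1)))
tn = int(np.sum((y_pred == 0) & (y_true == 0)))
fp = int(np.sum((y_pred == 1) & (y_true == 0)))
fn = int(np.sum((y_pred == 0) & (y_true == 1)))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / len(y_true)
print(f"Se = {sensitivity:.2%}, Sp = {specificity:.2%}, accuracy = {accuracy:.2%}")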
Molecular biology techniques were applied as follows. The concentration of CA-125 was determined using an enzyme immunoassay test system (Siemens; Germany). An enzyme immunoassay was used to determine the sFas concentration in blood serum using monoclonal antibodies, and the VEGF concentration using reagent kits (R&D; USA). The concentration of IL6 was evaluated by sandwich enzyme-linked immunosorbent assay (ELISA) using reagent kits (R&D; USA).
Different pathologists examined the hematoxylin and eosin stains. The WHO Classification of Tumors of Female Reproductive Organs (2003) was used for morphological diagnosis, since that classification was adopted in the Russian Federation at the time of the study. For immunohistochemical studies, paraffin blocks from 15 pregnant women with BOTs and 10 pregnant women with malignant ovarian tumors were selected. Analysis of angiogenesis was performed using antibodies to the vascular endothelial growth factor VEGF, the major signal transducer for angiogenesis (VENTANA; USA), and antibodies to the endothelial marker CD31, the type 1 platelet endothelial cell adhesion molecule. When evaluating the expression of CD31 under the microscope at low magnification, the areas with the largest number of microvessels were selected first. Subsequently, in two separate fields of view with increased microvasculature density, the number of all positive microvessels was counted (200-fold magnification). The VEGF expression level was evaluated by a semi-quantitative method (combining staining intensity and the number of positive cells) in five fields of view (400-fold magnification). When measuring the staining intensity, unstained cells were assigned score 0, cells with pale yellow staining score 1, yellow-brown stained cells score 2, and brown stained cells score 3. The number of positively stained cells was scored as follows: score 0 corresponded to less than 10% of all cells, score 1 to 10-49% of stained cells, score 2 to 50-74%, and score 3 to over 75% of stained cells.
The results of both counts were added, and a total score over 2 was considered positive. In addition, the histories and outcomes of pregnancy and childbirth after treatment were studied in 300 patients with ovarian neoplasms.
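To make the semi-quantitative scoring rule above concrete, here is a minimal, hypothetical Python sketch that combines an intensity score (0-3) and a positive-cell-proportion score (0-3) and flags the case as positive when the sum exceeds 2; the thresholds follow the description in the text, while the sample values are invented.

# Minimal sketch of the semi-quantitative VEGF scoring described above.
# Intensity score (0-3) + proportion score (0-3); a sum > 2 counts as positive.

def proportion_score(percent_positive_cells: float) -> int:
    """Score the share of positively stained cells (percent)."""
    if percent_positive_cells < 10:
        return 0
    if percent_positive_cells < 50:
        return 1
    if percent_positive_cells < 75:
        return 2
    return 3

def vegf_positive(intensity_score: int, percent_positive_cells: float) -> bool:
    total = intensity_score + proportion_score(percent_positive_cells)
    return total > 2

# Example: moderate (yellow-brown) staining in 60% of cells -> 2 + 2 = 4 -> positive
print(vegf_positive(intensity_score=2, percent_positive_cells=60.0))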
Statistical analysis was carried out using the SPSS 15.0 software package (IBM; USA). Data were analyzed by the frequency method using crosstabs. Differences were considered significant at p < 0.05.
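The paper used SPSS crosstabs; as an equivalent, hedged illustration of the same kind of frequency analysis, the Python sketch below builds a 2x2 cross-tabulation and applies a chi-square test, with made-up counts rather than the study data.

# Hypothetical sketch of a crosstab frequency analysis with a chi-square test,
# analogous to the SPSS crosstabs procedure. Counts below are placeholders.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: tumor type (BOT, malignant); columns: outcome (organ-preserving, radical surgery)
crosstab = np.array([[20, 5],
                     [3, 7]])

chi2, p_value, dof, expected = chi2_contingency(crosstab)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")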
RESULTS
The study demonstrated that the clinical characteristics of the examined pregnant women did not vary significantly between the groups. The age of the 76 pregnant women with BOTs and malignant ovarian tumors varied over a wide range, from 18 to 45 years. More than 60% of patients were aged 30. Pain in the lower abdomen with impaired function of neighboring organs (9% of cases) and an increase in abdomen size (10.9% of cases) were registered, and a history of menstrual irregularities (10.9% of cases) and infertility (2.7% of cases) was revealed in pregnant women with BOTs/malignant ovarian tumors. The structure of concomitant extragenital and gynecological pathologies and of previous gynecological operations before pregnancy in patients with tumor-like formations/ovarian tumors correlated mainly with age and did not depend on tumor morphology.
Among the BOT histological types, serous tumors prevailed (22 patients, 88%). Mucinous tumors were detected in 3 (12%) pregnant women. Bilateral ovarian lesions were present in 28% of patients. In most pregnant patients, stage I BOTs were diagnosed (19 patients, 76%). Stage II was revealed in 5 (20%) patients, and stage III was verified in only one patient.
Ultrasound signs in pregnant women with BOTs corresponded to several morphological types: in 32.6% of patients, mixed tumors with a predominant solid pattern were diagnosed; about 55% of patients had tumors with a predominant cystic component, and over 10% of patients had solid tumors. Doppler sonography revealed central and peripheral hypervascularization with low RI values (less than or equal to 0.4) and high peak systolic blood flow velocities (over 15 cm/s) obtained from the blood flow velocity curve analysis, as well as mosaic vessels indicating the presence of arteriovenous shunting in the tumor vasculature.
The use of the proposed model for the differential diagnosis of ovarian tumors in pregnant women made it possible to distinguish between tumor-like formations, benign ovarian tumors, BOTs and malignant ovarian tumors (sensitivity 100%, specificity 92.3%, overall model accuracy 92.8%). Due to the similarity of the images and hemodynamic indicators on ultrasound, it was impossible to distinguish BOTs from malignant ovarian tumors. At the same time, in all patients with neoplasms of the described type, blood vessels were located in the center, with a branched network in the septa, the solid component and the papillary components, and low-resistance blood flow was revealed.
In pregnant women with BOTs, the CA-125 concentration varied in the range from 24.4 to 361 U/ml in the I trimester and from 24.1 to 223 U/ml in the II trimester of pregnancy. The level of sFas was 40-200 ng/ml in the I trimester and 46-180 ng/ml in the II trimester. The VEGF concentration varied from 89 to 286 pg/ml in the I trimester and from 92 to 480 pg/ml in the II trimester of pregnancy. IL6 reached 3.6-12 pg/ml in the I trimester and 8-40.9 pg/ml in the II trimester.
Fig. 3. CD31 expression in the malignant ovarian tumor (×20). Vessels are marked with arrows.
In patients with malignant ovarian tumors (compared to patients with BOTs), a significant increase in the serum levels of CA-125 and the other tumor markers (sFas, VEGF, IL6) was observed at any time during pregnancy. In the blood of 3 patients with adenocarcinoma of the ovary, the CA-125 level was 540-1224.6 U/ml, the sFas level 180-312.6 ng/ml, the VEGF level 510-1028 pg/ml, and the IL6 level 9.8-40.9 pg/ml. Similar concentrations of molecular factors were observed in the blood of patients with dysgerminoma, mixed germ cell tumor and immature teratoma. In these patients, the CA-125 level exceeded 361 U/ml, the sFas level was above 240 ng/ml, the VEGF level above 490 pg/ml, and the IL6 level above 8.1 pg/ml.
When studying the BOTs morphology (Fig. 2), the features making it possible to distinguish BOTs from benign and malignant ovarian tumors were detected in 22 cases. In 3 cases, the inconsistencies were found in the final histological response of patients diagnosed with serous adenocarcinoma against the background serous borderline tumor. During the second preparations review no elements of the malignant tumor were found.
The borderline serous cystadenoma was a cystic tumor with discohesive wall and the pronounced papillary features which filled the entire inner surface and in 70% of cases were present on the outer surface. BOTs were characterized by the presence of epithelial features with the formation of cell bundles and separation of cells groups simultaneously with strictly ordered branching, in which small papilla came from large, centrally located papillae. Cells of the borderline serous tumors had some features of epithelial and mesothelial differentiation. Ciliated cells similar to cells of the fallopian tube were detected in one third of tumors. Cells with abundant eosinophilic cytoplasm and rounded nuclei resembled mesothelium, they were located on the tops of papilla. Cell nuclei were located basally, oval or round, with -slight atypia, delicate chromatin, and sometimes with pronounced nucleoli. Rare mitoses were detected (usually 4-10 in the fields of view). Psammoma (sand) bodies were revealed in a half of preparations.
Serous carcinomas reached large sizes (up to 20 cm in diameter), they consisted of cysts with serous or sanious contents, filled with soft loose papillary features. The outer surface was smooth with some papillary structures on it. The solid tumors usually had less pronounced pink gray papilla, they were soft or dense depending on the underlying stroma type. At the same time the foci of hemorrhage and necrosis were observed. Under the microscope the serous carcinomas had a papillary structure with solid foci, large round cells with polymorphic hyperchromatic nuclei, clumpy nuclear chromatin pattern and increased nuclear-cytoplasmic ratio, pseudostratified epithelium. Those were characterized by the loss of polarity, no cilia on the cell surface, increased mitotic activity.
The borderline mucinous cystadenoma of the ovary was usually multilocular with a diameter of up to 30 cm; it contained straw-colored liquid or mucus. Morphological examination of the preparations of the described tumors revealed areas lined with multilayered mucinous epithelium of the intestinal type with villous glandular and papillary features and slight atypia of cell nuclei.
Mucinous carcinoma differed from the borderline mucinous cystadenoma by foci with a complex arrangement of glands lined by cells with moderate to severe nuclear atypia and mitoses, as well as by foci of necrosis inside the tumor. CD31 expression (Fig. 3-4) was detected in the tumor stroma in all patients. The average number of CD31-positive vessels in women with BOTs was 36, and in women with malignant ovarian tumors it was 44. Immunoreactivity for VEGF, evaluated by the semi-quantitative method, was scored 5 (4-6) in women with BOTs and 6 (5-7) in women with malignant ovarian tumors. No significant differences in the expression levels of either marker were revealed.
The medical history analysis of pregnant women with BOTs and malignant ovarian tumors showed that those with disseminated tumors underwent cytoreductive surgery with abortion. The other patients underwent cytoreductive surgery twice: upon detection of the tumor and after the cesarean section.
All patients demonstrating signs of ovarian tumor malignization underwent midline laparotomy curving around the umbilicus on the left. In six patients, diagnostic laparoscopy was performed first, followed by laparotomy and removal of the primary lesion (due to suspected ovarian cancer).
The volume of the surgical procedure was determined intraoperatively in accordance with the clinical picture, reproductive history, age, ultrasonography, serum tumor marker levels and express histopathological examination results. During the intervention, surgical tumor staging was performed, together with revision of the abdomen and pelvic organs, greater omentum resection/removal, multiple peritoneal biopsies, and taking swabs or ascitic fluid from the abdominal cavity. In patients with mucinous tumors, an appendectomy was carried out. The patients not interested in pregnancy maintenance and fertility underwent radical surgery (7 patients of 76). At the first stage during pregnancy, 20 patients with BOTs underwent the organ-sparing intervention preserving the uterus and the healthy ovary fragment. In two patients, bilateral adnexectomy was performed; in one of them, the borderline tumor was found during the histopathological examination of the resected part of the visually unchanged contralateral ovary (stage IB).
It should be noted that errors and inaccuracies may occur during the histopathological examination of biopsy material or tumor preparations. Thus, during our study, in three pregnant women with ovarian tumors, morphological examination revealed tissue features characteristic of both BOT and malignant ovarian tumors. The patients were diagnosed with well-differentiated adenocarcinoma of both ovaries against the background of the borderline serous cystadenoma. In one of those patients, bilateral ovarian tumors with signs of malignization and ascites were clinically defined during weeks 11-12 of pregnancy. In the oncology hospital, diagnostic laparoscopy (right-sided adnexectomy with express histological examination) was performed, and a borderline cystadenoma was diagnosed. The laparoscopic entry was converted to laparotomy. A midline laparotomy was used for the left ovary biopsy, greater omentum resection and multiple peritoneal biopsies. Morphological examination diagnosed a differentiated adenocarcinoma that developed against the background of a serous borderline tumor, with cancer emboli in the lumen of the greater omentum vessels (ovarian cancer T3cN0M0). Artificial abortion and radical surgery were performed (hysterectomy with left adnexectomy and subtotal greater omentum resection). Cytological studies of the abdominal cavity swabs revealed signs of adenogenic cancer. Prior to the appointment of chemotherapy, an interdisciplinary oncological consultation was held because different specialists interpreted the cytological and histological results differently. The initial diagnosis was not confirmed. The patient was diagnosed with a borderline tumor of the ovary with noninvasive implants in the greater omentum, and it was decided not to use chemotherapy. The patient, observed for four years, demonstrates no signs of disease progression.
Fig. 4. CD31 expression in the borderline ovarian tumor (×20). Vessels are marked with arrows.
The treatment results of the patients with borderline tumors were as follows: 3 pregnant women underwent abortion and surgery (panhysterectomy due to the presence of adenocarcinoma together with the serous BOT), 2 women had spontaneous abortions, 10 patients had spontaneous deliveries at term, 6 women delivered prematurely by cesarean section due to obstetric indications, and in 4 patients repeated surgery was carried out for restaging.
Later, tumor recurrence was observed in two pregnant women with BOTs. In one of them, diagnosed with a serous histological type stage IA tumor in the resected ovarian tissue after the organ preservation surgery, the recurrence was detected in the 5th year of observation. The morphological examination revealed a well-differentiated adenocarcinoma, and a radical intervention supplemented with chemotherapy followed. In the 2nd patient, the recurrence was detected 2 years after the first surgery, and the tumor in its histological pattern was similar to the primary tumor (atypical proliferative serous tumor). After removal of the recurrent neoplasm, the patient received combined therapy. Both patients remained alive for more than 3 years. Five patients dropped out of the observation. We tracked the long-term effect of treatment in 17 of 25 patients for 3-10 years. All patients were alive at the time of the study. The overall 5-year survival rate was 100%.
In patients with BOTs, 9 pregnancies occurred 2-5 years after surgery, four of which ended in delivery with a favorable outcome. In three patients, pregnancy ended in spontaneous abortion.
DISCUSSION
Literature data indicate no specific clinical manifestations of BOTs during pregnancy. Doppler ultrasonography used in the model for differential diagnosis has high specificity.
Currently, no molecular factors have been identified that reliably characterize BOTs [2,15]. The use of most tumor markers is limited due to the high variability of their values, including variability depending on gestational age. In our study, a significant increase of the carcinogenesis marker levels above the threshold values (VEGF exceeding 500 pg/ml, IL6 above 8.1 pg/ml) was detected in pregnant women with malignant neoplasms of the ovary. The test specificity was 91.5%, and the sensitivity was 75%. The СА-125 concentration in pregnant women with malignant ovarian tumors exceeded 300 U/ml. Our results were consistent with the other authors' data [16]. When evaluating the VEGF expression level in the paraffin blocks by the semi-quantitative method, increased immunoreactivity for the marker (score 5-7) was detected in ovarian carcinomas. The association of VEGF expression with ovarian cancer has been confirmed by many studies. An increase in VEGF immunoreactivity in ovarian carcinoma (compared to BOT) has been proven, and a high VEGF expression level indicates disease progression [17]. Increased immunoreactivity of CD31 in the preparations of malignant ovarian tumors compared to BOT preparations indicates increased blood flow in the tumor tissues due to the neovascularization detected in malignant tumors [18].
The main method of BOTs treatment is surgery (organ preservation or a radical approach). Researchers worldwide are actively discussing the possibility of ultra-conservative interventions as an organ preservation option, leaving the BOT-affected ovarian tissue in place after resection/cystectomy [2,19]. Adnexectomy on the lesion side with a morphological study of peritoneal swabs and multiple biopsies is considered the optimal intervention volume. The final surgical staging should be performed during cesarean section or after delivery (in case of vaginal birth) [20,21]. We did not use ultra-conservative interventions in our study; 80% of patients with BOTs underwent organ preservation surgery. The restaging surgery was performed in 16% of patients.
Approximately one-third of the patients with BOTs and well-differentiated adenocarcinoma need a final postoperative morphological study using paraffin blocks [2, 22-24]. According to some reports, the high overdiagnosis rate in patients with BOTs having foci suspicious for ovarian cancer leads to an unreasonable overestimation of the surgical intervention volume, even when the final histopathological examination is performed in specialized institutions [3]. According to our results, discrepancies in the interpretation of the morphological report in the differential diagnosis of BOTs and ovarian cancer were detected in 12% of patients. The diverse BOTs structure and the need for a thorough study of multiple slices are the reason for the strict requirements for the morphologist's qualification and experience. Other researchers hold a similar opinion [3,9,22].
The overall recurrence rate in patients with BOTs varies from 3 to 10%, and recurrence occurs in 25% of patients with advanced tumor stages. Our study has revealed recurrence in 8% of patients. According to the literature data, the 5-year survival rate of patients with stage I-II tumors is 98-99%, and in patients with stage III-IV tumors it is 82-90% [25,26]. Possibly, the high 5-year survival rate values are associated with the detection of BOTs at early stages and with the small sample size.
The papers on the study of fertility after organ preservation treatment report that spontaneous pregnancies occur in 40-72% of patients; the effect of pregnancy on the course of the disease remains unknown [1,2,27,28]. It is worth mentioning that, in our study, pregnancies occurred in more than 35% of patients with BOTs diagnosed in pregnancy after organ preservation surgical interventions. The results obtained made it possible to highlight the following complex of important signs in the diagnostic algorithm for pregnant women with suspected malignization of ovarian tumors: mixed echographic structure with a hypervascular supply pattern and low RI values, a VEGF value exceeding 500 pg/ml, an IL6 value over 8.1 pg/ml, and a СА-125 concentration exceeding 300 U/ml. However, the similarity of the ultrasonic signs of BOTs and malignant ovarian tumors did not allow us to distinguish between these types of neoplasms accurately. The diagnosis of BOT is confirmed during the final postoperative morphological examination. The results of the express histological analysis of ovarian tissue in frozen sections do not always provide true information on the nature of ovarian tumors in pregnant women. The high 5-year survival rate after organ preservation surgical treatment of BOTs carried out during pregnancy indicates the possibility of using a gentle approach in the treatment of the tumor's early stages.
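To make the signs complex above easier to apply, the following sketch expresses it as a simple screening rule. It is only an illustrative operationalization under assumptions: the variable and function names are hypothetical, and the requirement that all ultrasound signs and all marker thresholds be met simultaneously is our reading of the text, not a validated clinical algorithm.

```python
from dataclasses import dataclass

@dataclass
class UltrasoundFindings:
    mixed_echo_structure: bool  # mixed (cystic-solid) echographic structure
    hypervascular: bool         # hypervascular supply pattern on Doppler
    low_ri: bool                # low resistance index (RI) values

def suspected_malignization(us: UltrasoundFindings,
                            vegf_pg_ml: float,
                            il6_pg_ml: float,
                            ca125_u_ml: float) -> bool:
    """Flag suspected malignization of an ovarian tumor in pregnancy.

    Thresholds follow the signs complex reported in the text:
    VEGF > 500 pg/ml, IL6 > 8.1 pg/ml, CA-125 > 300 U/ml,
    combined with a suspicious ultrasound picture.
    """
    suspicious_us = us.mixed_echo_structure and us.hypervascular and us.low_ri
    elevated_markers = vegf_pg_ml > 500 and il6_pg_ml > 8.1 and ca125_u_ml > 300
    return suspicious_us and elevated_markers

# Example: a patient matching all criteria would be flagged for further work-up.
print(suspected_malignization(UltrasoundFindings(True, True, True), 620, 9.4, 415))  # True
```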
CONCLUSION
Despite significant scientific and practical interest in BOTs, many problems related to improving the diagnosis and treatment of patients in pregnancy have not been resolved. The predominance of early tumor stages, the relatively mild course and the favorable prognosis in patients with BOTs make it possible to use gentle surgical treatment preserving menstrual function and fertility. | 2020-04-30T09:02:07.876Z | 2020-04-01T00:00:00.000 | {
"year": 2020,
"sha1": "36e1927d2c5a0d61c20baa91db0af283efa5a335",
"oa_license": "CCBY",
"oa_url": "https://vestnik.rsmu.press/files/issues/vestnik.rsmu.press/2020/2/2020-2-11_en.pdf?lang=en",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "db101674d4a028b46d872783d5e1ad6ed592d7e9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14084810 | pes2o/s2orc | v3-fos-license | Russian cooperation.
A situation in which a finite set of players can obtain certain payoffs by cooperation can be described by a cooperative game with transferable utility, or simply a TU-game. A (single-valued) solution for TU-games assigns a payoff distribution to every TU-game. A well-known solution is the Shapley value. In the literature various models of games with restricted cooperation can be found. So, instead of allowing all subsets of the player set N to form, it is assumed that the set of feasible coalitions is a subset of the power set of N. In this paper, we consider such sets of feasible coalitions that are closed under union, i.e. for any two feasible coalitions also their union is feasible. We consider and axiomatize two solutions or rules for these games that generalize the Shapley value: one is obtained as the conjunctive permission value using a corresponding superior graph, the other is defined as the Shapley value of a modified game similar as the Myerson value for games with limited communication.
Keywords TU-game · Restricted cooperation · Union closed system · Shapley value · Permission value · Superior graph JEL Classification C71
Introduction
A cooperative game with transferable utility, or simply a TU-game, is a finite set of players and for any subset (coalition) of players a worth representing the total payoff that the coalition can obtain by cooperating. A (single-valued) solution is a function that assigns to every game a payoff vector whose components are the individual payoffs of the players. One of the most applied solutions for cooperative TU-games is the Shapley value (Shapley 1953), which is applied to economic allocation problems in, e.g. Graham et al. (1990), Maniquet (2003), Chun (2006), Tauman and Watanabe (2007), van den Brink et al. (2007), Bergantiños and Lorenzo-Freire (2008), and Ligett et al. (2009).
In its classical interpretation, a TU-game describes a situation in which the players in every coalition S of N can cooperate to form a feasible coalition and earn its worth. In the literature various restrictions on coalition formation are developed. 1 For example, in Myerson (1977) a coalition is feasible if it is connected in a given (communication) graph. In this paper, we consider games in which the collection of feasible coalitions is closed under union, meaning that for any pair of feasible coalitions also their union is feasible. A well-known example of a union closed system is an antimatroid. 2 An example of an antimatroid is an acyclic permission structure where players need permission from (some of) their superiors in a hierarchical structure when they want to cooperate with others. Since the concept of union closed system is more general than the notion of antimatroid, games on union closed systems are more general than the games on antimatroids as considered in Algaba et al. (2003, 2004), and, therefore, also more general than the games with acyclic permission structure, considered in Gilles et al. (1992), van den Brink and Gilles (1996), Gilles and Owen (1994) and van den Brink (1997).
In this paper, we define and axiomatize two solutions for games on union closed systems. The first solution is based on games with a permission structure, the other directly applies the Shapley value to some restricted game. Both solutions generalize the Shapley value in the sense that they are equal to the Shapley value when the union closed system is the power set of player set N . First, we apply a method similar as Myerson (1977) to define a solution for games on union closed systems which generalizes the Shapley value for games on antimatroids as axiomatized in Algaba et al. (2003). To do so, a modified or restricted game is defined. This game is obtained by assigning to any non-feasible coalition the worth of its largest feasible subset.
1 For a survey we refer to Bilbao (2000). 2 A collection of feasible coalitions A ⊆ 2 N is an antimatroid if, besides being union closed and containing ∅, it satisfies accessibility meaning that S ∈ A implies that there is a player i ∈ S such that S\{i} ∈ A, see Dilworth (1940).
By union closedness, this largest feasible subset is unique. Then the union rule for games on union closed systems is defined as the Shapley value of this restricted game.
To define the second solution, we define for a union closed system its corresponding superior graph. This is the directed graph that is obtained by putting an arc from player i to player j if every feasible coalition containing player j also contains player i. We then consider the game with permission structure induced by this superior graph, and define the superior rule as its conjunctive permission value. This paper is organized as follows. Section 2 is a preliminary section on cooperative TU-games and games with a permission structure. Section 3 introduces the two solutions for games on union closed systems and in Sect. 4 we provide axioms that can be satisfied by solutions for games on union closed systems. In Sect. 5, we give an axiomatization of the superior rule for games on union closed systems, and in Sect. 6 we axiomatize the union rule. Section 7 contains concluding remarks.
TU-games
A situation in which a finite set of players can obtain certain payoffs by cooperating can be described by a cooperative game with transferable utility, or simply a TU-game, being a pair (N , v) where N = {1, . . . , n} is a finite set of players and v : 2 N → R with v(∅) = 0 is a characteristic function; v(E) is the worth of coalition E, i.e., the members of coalition E can obtain a total payoff of v(E) by agreeing to cooperate. Since we take the player set N to be fixed, we denote the game (N , v) just by its characteristic function v. We denote the collection of all characteristic functions on N by G N and n = |N | denotes the cardinality of N .
A game v ∈ G N is monotone if v(E) ≤ v(F) for all E ⊆ F ⊆ N . We denote by G N M the class of all monotone TU-games on N .
A payoff vector for a game is a vector x ∈ R n assigning a payoff x i to every i ∈ N . In the sequel, for E ⊆ N we denote x(E) = Σ i∈E x i . A (single-valued) solution f is a function that assigns to any v ∈ G N a unique payoff vector. The most well-known (single-valued) solution is the Shapley value given by Sh i (v) = Σ E⊆N \{i} |E|!(n − |E| − 1)!/n! · (v(E ∪ {i}) − v(E)) for all i ∈ N . For each non-empty T ⊆ N , the unanimity game u T is given by u T (E) = 1 if T ⊆ E, and u T (E) = 0 otherwise. It is well known that the unanimity games form a basis for G N (Harsanyi 1959).
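To illustrate these preliminaries, the sketch below computes the Shapley value directly from the formula above. It is only an illustrative implementation, not code from the paper; players are indexed 0, . . . , n − 1 and the helper names are our own.

```python
from itertools import combinations
from math import factorial

def shapley_value(n, v):
    """Shapley value of the TU-game (N, v) with N = {0, ..., n-1}.

    v maps a frozenset of players to its worth, with v(frozenset()) == 0.
    """
    payoff = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for E in combinations(others, size):
                E = frozenset(E)
                weight = factorial(len(E)) * factorial(n - len(E) - 1) / factorial(n)
                payoff[i] += weight * (v(E | {i}) - v(E))
    return payoff

def unanimity(T):
    """Unanimity game u_T: worth 1 exactly when the coalition contains T."""
    T = frozenset(T)
    return lambda E: 1.0 if T <= E else 0.0

# The players in T share the worth equally; all other players get zero.
print(shapley_value(3, unanimity({0, 1})))  # -> [0.5, 0.5, 0.0] (up to floating point rounding)
```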
Cooperative games with a permission structure
A game with a permission structure on N describes a situation where some players in a TU-game need permission from other players before they are allowed to cooperate within a coalition. Formally, a permission structure can be described by a directed graph on N . A directed graph or digraph is a pair (N , D) where N = {1, . . . , n} is a finite set of nodes (representing the players) and D ⊆ N × N is a binary relation on N . In the sequel we simply refer to D for a digraph (N , D) and we denote the collection of all digraphs on N by D N . For i ∈ N the nodes in S D (i) := { j ∈ N | (i, j) ∈ D} are called the successors of i, and the nodes in P D (i) := { j ∈ N | ( j, i) ∈ D} are called the predecessors of i. By S D (i) we denote the set of successors of i in the transitive closure of D, i.e., j ∈ S D (i) if and only if there exists a sequence of players (h 1 , . . . , h t ) such that h 1 = i, h k+1 ∈ S D (h k ) for all k = 1, . . . , t − 1, and h t = j. Note that acyclicity of a digraph D implies that D is irreflexive, i.e., (i, i) ∉ D for all i ∈ N , and that D has at least one node that does not have a predecessor.
A tuple (v, D) with v ∈ G N a TU-game and D ∈ D N a digraph on N is called a game with a permission structure. In this paper, we follow the conjunctive approach as introduced in Gilles et al. (1992) and van den Brink and Gilles (1996) in which it is assumed that a player needs permission from all its predecessors in order to cooperate with other players. 3 Therefore, a coalition is feasible if and only if for any player in the coalition all its predecessors are also in the coalition. So, for permission structure D the set of conjunctive feasible coalitions is given by {E ⊆ N | P D (i) ⊆ E for every i ∈ E}. The conjunctive restricted game r c v,D assigns to each coalition E ⊆ N the worth of its largest conjunctive feasible subset of E. Then the conjunctive permission value ϕ c is the solution that assigns to every game with a permission structure the Shapley value of the restricted game, thus ϕ c (v, D) = Sh(r c v,D ).
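As a concrete illustration of the conjunctive approach, the sketch below enumerates the conjunctive feasible coalitions of a digraph and builds the corresponding restricted game; applying the shapley_value helper from the previous sketch to this restricted game then gives the conjunctive permission value. The function names and the small example digraph are our own illustrative choices, not taken from the paper.

```python
from itertools import combinations

def predecessors(D, i):
    """Predecessors of player i in digraph D, given as a set of (from, to) arcs."""
    return {a for (a, b) in D if b == i}

def conjunctive_feasible(n, D):
    """All coalitions E such that every member of E has all its predecessors in E."""
    feasible = []
    for size in range(n + 1):
        for E in combinations(range(n), size):
            E = frozenset(E)
            if all(predecessors(D, i) <= E for i in E):
                feasible.append(E)
    return feasible

def conjunctive_restricted_game(n, v, D):
    """Restricted game: each coalition earns the worth of its largest feasible subset."""
    feasible = conjunctive_feasible(n, D)
    def r(E):
        # The conjunctive feasible sets are union closed, so the union of all
        # feasible subsets of E is itself feasible and is the unique largest one.
        return v(frozenset().union(*(F for F in feasible if F <= E)))
    return r

# Example: player 0 dominates player 1; v simply counts coalition members.
D = {(0, 1)}
v = lambda E: float(len(E))
r = conjunctive_restricted_game(2, v, D)
print(r(frozenset({1})), r(frozenset({0, 1})))  # -> 0.0 2.0  ({1} lacks its predecessor 0)
# The conjunctive permission value would then be shapley_value(2, r).
```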
Solutions for games on union closed systems
We consider tuples (v, ), where v is a TU-game on player set N and ⊆ 2 N is a collection of subsets of N . We call such a tuple a game with limited cooperation. In such a game the collection of subsets restricts the cooperation possibilities of the players in N . A set S ⊆ N of players can attain its value v(S) if S ∈ . When S ∈ then not all players are able to cooperate within S, so that v(S) can not be realised. We say that a coalition S ∈ 2 N is feasible if S ∈ . In this paper, we only consider sets of feasible coalitions that are closed under union.
Notice that = {∅, N } is the smallest union closed system and that = 2 N is the largest one. Also notice that every antimatroid is a union closed system by definition. Also the collection of conjunctive feasible coalitions of a permission structure is union closed (see Gilles et al. 1992) and this collection is an antimatroid when the permission structure is acyclic (see Algaba et al. 2004). We assume that the 'grand coalition' N is feasible for notational convenience. The results in this paper can be modified to hold without this assumption if in the axioms we distinguish between players that belong to at least one feasible coalition and those that do not belong to any feasible coalition. Note that by condition 2 in Definition 1 the 'grand coalition' is feasible if every player belongs to at least one feasible coalition. So, instead of assuming that N ∈ we could do with the weaker normality assumption stating that every player belongs to at least one feasible coalition. In the sequel we denote the collection of all union closed systems in 2 N by C N .
A solution for games on union closed systems is a function f that assigns a payoff distribution f (v, ) ∈ R n to every v ∈ G N and ∈ C N . In the following, we introduce two solutions.
For a tuple (v, ), let σ (S) be the union of all feasible subsets of S, i.e., σ (S) is the largest feasible subset of S in the system; by union closedness this largest feasible subset is unique. Then the restricted game r v, ∈ G N of (v, ) is defined by r v, (S) = v(σ (S)) for all S ⊆ N , and thus assigns to each coalition S ⊆ N the worth of its largest feasible subset. Notice that when v is monotone, it holds that for every ∈ C N also the restricted game r v, is monotone, since S ⊆ T implies that σ (S) ⊆ σ (T ). Now, the first solution is the union rule, which is defined similarly to the Myerson rule for games with limited communication in Myerson (1977) and the Shapley value for games on antimatroids in Algaba et al. (2003). The union rule, denoted by U , is given by U (v, ) = Sh(r v, ), i.e., the union rule assigns to every (v, ) the Shapley value of the restricted game r v, .
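The sketch below spells out the union rule for a small, hypothetical union closed system. It is an illustration only: the system, the game, and all function names are our own choices, and the Shapley value is computed here by averaging marginal contributions over orderings rather than by the subset formula used earlier.

```python
from itertools import permutations

def shapley(n, v):
    """Shapley value as the average marginal contribution over all player orderings."""
    perms = list(permutations(range(n)))
    total = [0.0] * n
    for order in perms:
        coalition = frozenset()
        for i in order:
            total[i] += v(coalition | {i}) - v(coalition)
            coalition |= {i}
    return [t / len(perms) for t in total]

def sigma(S, system):
    """Largest feasible subset of S: the union of all feasible subsets of S."""
    return frozenset().union(*(F for F in system if F <= S))

def union_rule(n, v, system):
    """Union rule: Shapley value of the restricted game r(S) = v(sigma(S))."""
    return shapley(n, lambda S: v(sigma(S, system)))

# A union closed system on N = {0, 1, 2}: player 1 occurs only together with player 0.
system = [frozenset(), frozenset({0}), frozenset({2}), frozenset({0, 2}),
          frozenset({0, 1}), frozenset({0, 1, 2})]
v = lambda S: 1.0 if {1, 2} <= S else 0.0  # unanimity game on {1, 2}
print(union_rule(3, v, system))  # -> [1/3, 1/3, 1/3]
```

In this hypothetical example, player 0 is a null player in v but still receives a positive payoff, because players 1 and 2 can only realize their worth inside a feasible coalition that contains player 0.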
The second solution applies the conjunctive permission value to a digraph associated with the union closed system, called the superior graph. For a union closed system ∈ C N , the associated superior graph is the graph that assigns an arc from a player i to a player j if and only if every feasible coalition containing player j also contains player i. So, the arcs can be seen as some kind of dominance relation in the sense that a player is a subordinate of another player if it 'needs' the other player to be in a feasible coalition. For two players i, j ∈ N , i = j, player i is a superior of player j in ∈ C N , if i ∈ S for every S ∈ with j ∈ S. In that case we call player j a subordinate of player i.
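A minimal sketch of the superior graph construction follows, using the same hypothetical three-player system as in the previous sketch; the function name is our own.

```python
def superior_graph(n, system):
    """Arcs (i, j) such that i is a superior of j: every feasible coalition
    that contains j also contains i."""
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and all(i in S for S in system if j in S)}

system = [frozenset(), frozenset({0}), frozenset({2}), frozenset({0, 2}),
          frozenset({0, 1}), frozenset({0, 1, 2})]
print(superior_graph(3, system))  # -> {(0, 1)}: player 0 is the only superior of player 1
```

Feeding this digraph into the conjunctive restricted game of the earlier sketch and taking the Shapley value would then give the superior rule defined below.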
Notice that i is a subordinate (superior) of j in ∈ C N if and only if i is a successor (predecessor) of j in D . The next corollary is straightforward.
Corollary 1 Let ∈ C N . If i is a superior of j in , and k is a superior of i in , then k is a superior of j in .
Having constructed the superior graph D of a union closed system , we consider now the set of feasible coalitions of the permission structure D according to the conjunctive approach, and we denote this collection of coalitions by = c D .
Proposition 1 For every ∈ C N , each coalition that is feasible in the union closed system is also conjunctive feasible for the induced permission structure D .
Proof Let S ∈ . By definition of superior it holds that S includes all superiors of i for every i ∈ S. On the other hand it holds that ( j, i) ∈ D if and only if j is a superior of i. It follows that S is feasible for the permission structure D according to the conjunctive approach. Hence every coalition that is feasible in the union closed system is also conjunctive feasible for D .
The superior rule, denoted by SUP, is the solution for games on union closed systems given by SUP(v, ) = ϕ c (v, D ), i.e., the superior rule assigns to every (v, ) the conjunctive permission value of the game v with the induced permission structure D . The union rule and the superior rule are different in general, as Example 1 shows.
Axioms
In this section, we state several axioms that can be satisfied by solutions for games on union closed systems. The first five axioms are generalizations of axioms used to axiomatize the conjunctive permission value in van den Brink and Gilles (1996) and the Shapley value for games on poset antimatroids in Algaba et al. (2003). First, efficiency states that the total sum of payoffs equals the worth of the 'grand' coalition.
Axiom 1 (Efficiency) For every game v ∈ G N and union closed system ∈ C N , we have Σ i∈N f i (v, ) = v(N ). Additivity is a straightforward generalization of the well-known additivity axiom for TU-games.
Axiom 2 (Additivity) For every pair of cooperative TU-games v, w ∈ G N and union closed system ∈ C N , we have f (v + w, ) = f (v, ) + f (w, ). Next, we introduce a generalization of the inessential player property stating that a null player in v whose subordinates in are all null players in v, earns a zero payoff. We say that player i ∈ N is inessential in (v, ) if i and all its subordinates in are null players in v (player i is a null player in v if v(E ∪ {i}) = v(E) for all E ⊆ N \{i}); we denote by I (v, ) the set of all inessential players in (v, ).
Axiom 3 (Inessential player property) For every game v ∈ G N and union closed system ∈ C N , we have that f i (v, ) = 0 for all i ∈ I (v, ).
The next axiom generalizes the necessary player property (which holds for monotone TU-games) in a straightforward way, stating that a necessary player in a monotone game earns at least as much as any other player, irrespective of the coalitions in the union closed system. A player i ∈ N is necessary in game v if v(E) = 0 for all E ⊆ N \{i}.
Axiom 4 (Necessary player property) For every monotone game v ∈ G N M and union closed system ∈ C N , we have f i (v, ) ≥ f j (v, ) for all j ∈ N , when i ∈ N is a necessary player in v.
Finally, structural monotonicity is generalized using the superior graph, stating that whenever player i is a superior of player j in the union closed system and the game is monotone, then player i earns at least as much as player j.
Axiom 5 (Structural monotonicity) For every monotone game v ∈ G N M and union closed system ∈ C N , we have f i (v, ) ≥ f j (v, ) whenever i is a superior of j in .
In the next section, we show that the superior rule is characterized by these five axioms. The union rule satisfies these axioms except the inessential player property. 5 However, the union rule satisfies the weaker axiom requiring zero payoffs for inessential players only in games where the worth of any coalition equals the worth of its largest feasible subset. 6 Axiom 6 (Inessential player property for union closed games) For every game v ∈ G N and union closed system ∈ C N with v = r v, , we have f i (v, ) = 0 for all i ∈ I (v, ). Of course, also the superior rule satisfies this weaker axiom. Finally, we introduce another axiom that is satisfied by the union rule but not by the superior rule. It states that the payoffs only depend on the worths of feasible coalitions.
Axiom 7 (Independence of irrelevant coalitions) For every pair of cooperative TU-games v, w ∈ G N and union closed system ∈ C N such that v(E) = w(E) for all E ∈ , we have f (v, ) = f (w, ).
To show that this axiom is not satisfied by the superior rule, consider again Example 1 and let game w be given by w = r v, . Obviously w(E) = v(E) for all E ∈ . Computing the superior rule with the induced superior graph yields SUP(v, ) = (0, 0, 1, 0) and SUP(w, ) = (1/6, 0, 2/3, 1/6), so the axiom is not satisfied. In Sect. 6 we show that the union rule is characterized by the latter two axioms together with the first four axioms.
Axiomatization of the superior rule
The following theorem characterizes the superior rule for games on union closed systems.
Theorem 1 A solution f for cooperative games on union closed systems is equal to the superior rule SUP if and only if it satisfies efficiency, additivity, the inessential player property, the necessary player property and structural monotonicity.
Proof First, the superior rule satisfies the five axioms. By efficiency of the conjunctive permission value (i.e., Σ i∈N ϕ c i (v, D) = v(N ) for every game with a permission structure), the superior rule satisfies efficiency. Additivity, the inessential player property, the necessary player property and structural monotonicity follow from the corresponding axioms of the conjunctive permission value for games with a permission structure, see van den Brink and Gilles (1996, Theorem 3.6).
The proof of uniqueness follows similar steps as the uniqueness proof for the conjunctive permission value in van den Brink and Gilles (1996, Theorem 3.6). Suppose that solution f satisfies the five axioms. Let v 0 be the null game given by v 0 (E) = 0 for all E ⊆ N . The inessential player property then implies that f i (v 0 , ) = 0 for all i ∈ N . Now, consider a union closed system and game w T = c T u T , for some c T > 0 and ∅ ≠ T ⊆ N . Note that w T ∈ G N M . We distinguish the following three cases with respect to i ∈ N : 1. If i ∈ T then the necessary player property implies that there exists a c * ∈ R such that f i (w T , ) = c * for all i ∈ T , and f i (w T , ) ≤ c * for all i ∈ N \T . 2. If i ∈ N \T and T ∩ ({i} ∪ S D (i)) ≠ ∅ then i is a superior of some j ∈ T , so structural monotonicity implies that f i (w T , ) ≥ f j (w T , ) = c * , and thus with case 1 that f i (w T , ) = c * . 3. If i ∈ N \T and T ∩ ({i} ∪ S D (i)) = ∅ then the inessential player property implies that f i (w T , ) = 0.
From 1 and 2 it follows that f i (w T , ) = c * for i ∈ T ∪ P D (T ). Efficiency and 3 then imply that Σ i∈N f i (w T , ) = |T ∪ P D (T )| c * = c T , implying that c * , and thus f (w T , ), is uniquely determined.
Next, consider (w T , ) with w T = c T u T for some c T < 0 (and thus we cannot apply the necessary player property and structural monotonicity since w T is not monotone). However, additivity and f (v 0 , ) = 0 imply that f (w T , ) = − f (−c T u T , ), which is uniquely determined by the previous step since −c T > 0.
Finally, since every characteristic function v ∈ G N can be written as a linear combination of unanimity games, additivity implies that f (v, ) is uniquely determined. We conclude this section by showing that the five axioms stated in Theorem 1 are logically independent.
1. The solution that assigns to every game on union closed system simply the Shapley value of game v, i.e., f (v, ) = Sh(v), satisfies efficiency, additivity, the inessential player property and the necessary player property. It does not satisfy structural monotonicity. 2. For v ∈ G N and ∈ C N , let v ∈ G N be given by for all E ⊆ N . The solution f (v, ) = Sh(v) satisfies efficiency, additivity, the inessential player property and structural monotonicity. It does not satisfy the necessary player property.
3. The equal division solution given by f i (v, ) = v(N )/|N | for all i ∈ N , satisfies efficiency, additivity, the necessary player property and structural monotonicity. It does not satisfy the inessential player property. 4. The equal division over essential players, given by equal division of v(N ) over the essential players (and zero for the inessential players), satisfies efficiency, the inessential player property, the necessary player property and structural monotonicity. It does not satisfy additivity. 5. The zero solution given by f i (v, ) = 0 for all i ∈ N satisfies additivity, the inessential player property, the necessary player property and structural monotonicity. It does not satisfy efficiency.
Axiomatization of the union rule
As mentioned in Sect. 4, all axioms that are used to characterize the superior rule in Theorem 1 are also satisfied by the union rule, except the inessential player property. Instead, it satisfies the weaker inessential player property for union closed games and independence of irrelevant coalitions. Replacing in Theorem 1 the inessential player property by the weaker inessential player property for union closed games, and adding independence of irrelevant coalitions, characterizes the union rule. In that case we can do without structural monotonicity. To prove this characterization, we use the following lemma. For ∈ C N and T ⊆ N , we define T = {H ∈ | T ⊆ H } as the set of feasible coalitions containing coalition T .
Lemma 1 For every ∈ C N , T ⊆ N and c ∈ R, there exist numbers δ H ∈ R, H ∈ T , such that r cu T , = Σ H ∈ T δ H u H .
Proof Consider ∈ C N , T ⊆ N and c ∈ R. If T ∈ then T ∈ T , and we have δ T = c and δ H = 0 for all H ∈ T \{T }. If T ∉ , then define T 1 as the set of coalitions in T that are minimal with respect to set inclusion and, recursively, for k ≥ 2, define T k as the set of minimal coalitions in T \(T 1 ∪ · · · ∪ T k−1 ). Since N is finite there exists an M < ∞ such that T k ≠ ∅ for all k ∈ {1, . . . , M}, T M+1 = ∅ and T 1 ∪ · · · ∪ T M = T . Since by definition T k ∩ T l = ∅ for all k ≠ l, we have that T 1 , . . . , T M is a partition of the set {H ∈ | T ⊂ H } of feasible coalitions containing non-feasible coalition T . (Note that this set equals T since T ∉ .) Then δ H = c for all H ∈ T 1 and, recursively for k = 2, . . . , M, the numbers δ H , H ∈ T k , are determined by δ H = c − Σ{δ H' | H' ∈ T , H' ⊊ H }.

Theorem 2 A solution f for cooperative games on union closed systems is equal to the union rule U if and only if it satisfies efficiency, additivity, the inessential player property for union closed games, the necessary player property and independence of irrelevant coalitions.
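As a numerical check of Lemma 1, the sketch below computes the Harsanyi dividends of the restricted game of a unanimity game on the same small, hypothetical union closed system used earlier: the dividends are exactly the coefficients δ H of the lemma, and they vanish outside the set of feasible coalitions containing T. The helper names are our own.

```python
from itertools import combinations

def subsets(S):
    S = sorted(S)
    return (frozenset(E) for k in range(len(S) + 1) for E in combinations(S, k))

def dividends(n, v):
    """Harsanyi dividends D_v(S) = sum over E subset of S of (-1)^(|S|-|E|) v(E),
    so that v = sum over S of D_v(S) * u_S; for the restricted game of c*u_T these
    are the delta_H coefficients of Lemma 1."""
    return {S: sum((-1) ** (len(S) - len(E)) * v(E) for E in subsets(S))
            for S in subsets(range(n)) if S}

def sigma(S, system):
    return frozenset().union(*(F for F in system if F <= S))

system = [frozenset(), frozenset({0}), frozenset({2}), frozenset({0, 2}),
          frozenset({0, 1}), frozenset({0, 1, 2})]
T = frozenset({1})                                   # a non-feasible coalition, c = 1
u_T = lambda E: 1.0 if T <= E else 0.0
r = lambda E: u_T(sigma(E, system))                  # restricted game of (u_T, system)
for S, d in dividends(3, r).items():
    if abs(d) > 1e-9:
        print(sorted(S), d)
# -> [0, 1] 1.0 : the only nonzero dividend lies on a feasible coalition containing T.
```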
1. The superior rule satisfies efficiency, additivity, the inessential player property for union closed games and the necessary player property. It does not satisfy independence of irrelevant coalitions. 2. The solution that assigns to every game on a union closed system the weighted Shapley value of the restricted game r v, for some exogenous weight system ω ∈ R n with ω i ≠ ω j for some i, j ∈ N , satisfies efficiency, additivity, the inessential player property for union closed games and independence of irrelevant coalitions. It does not satisfy the necessary player property. 3. The equal division solution given by f i (v, ) = v(N )/|N | for all i ∈ N , satisfies efficiency, additivity, the necessary player property and independence of irrelevant coalitions. It does not satisfy the inessential player property for union closed games. 4. The equal division over non-null players, given by equal division of v(N ) over the players in N \Null(v, ) and zero for the null players, where Null(v, ) denotes the set of null players in the restricted game r v, , satisfies efficiency, the inessential player property for union closed games, the necessary player property and independence of irrelevant coalitions. It does not satisfy additivity. 5. The zero solution given by f i (v, ) = 0 for all i ∈ N satisfies additivity, the inessential player property for union closed games, the necessary player property and independence of irrelevant coalitions. It does not satisfy efficiency.
Concluding remarks
In this paper, we introduced two generalizations of the Shapley value to games on union closed systems. The superior rule applies the conjunctive permission value to an associated game with permission structure, while the union rule is obtained as the Shapley value of the restricted game. Both rules satisfy efficiency, additivity, the inessential player property for union closed games, the necessary player property and structural monotonicity. We obtain an axiomatization of the superior rule by strengthening the inessential player property for union closed games to the stronger inessential player property. This stronger property is not satisfied by the union rule. We obtain a characterization of the union rule by adding independence of irrelevant coalitions. In that case we can do without structural monotonicity. As mentioned in Sect. 3, both the superior and the union rule can also be defined and axiomatized without assuming in Definition 1 that the 'grand coalition' N is feasible. By condition 2 in that definition, the players that do not belong to the largest feasible subset of N do not belong to any feasible coalition. Referring to these players as irrelevant players, we can define the superior rule and the union rule by applying these two rules to the game and union closed system restricted to R( ) = {i ∈ N | there is an S ∈ with i ∈ S}, and assigning zero payoff to all irrelevant players. The two rules can be axiomatized for such situations by requiring the axioms for the relevant players, and by adding the irrelevant player property stating that irrelevant players get zero payoff.
The axioms discussed in this paper all are applied to a fixed union closed system . Myerson (1980) characterized the Myerson rule for conference structures using fairness. A straightforward modification of the fairness axiom in Myerson (1977, 1980) states that deleting a feasible coalition S ∈ , such that \{S} is still union closed, changes the payoffs of players in S by the same amount. The superior rule does not satisfy this fairness, but the union rule does. However, the union rule is not the only solution for games on union closed systems that satisfies (component) efficiency, fairness and the irrelevant player property. 7 Axiomatizations using axioms that require the set of feasible coalitions to be allowed to change, such as fairness, will be studied in future research.
Another point we would like to mention is that the notion of conference structure does not put any condition on the collection of feasible sets, i.e., a conference structure is an arbitrary collection of subsets of N . However, by Myerson's (1980) definition of connectedness, every single player is connected and thus earns its own worth in the restricted game. So, even if a singleton does not belong to the conference structure, a single player earns its worth in the restricted game. This differs from our approach, in which r v, ({i}) = v({i}) if {i} ∈ , and r v, ({i}) = 0 otherwise. Alternatively, in line with Myerson (1980) we could always take r v, ({i}) = v({i}) irrespective of whether {i} is feasible or not. Because of Myerson's definition of connectedness and thus the restricted game, it does not matter whether a conference structure does or does not contain {i} for any i ∈ N . Consequently, an arbitrary conference structure F yields the same Myerson rule payoffs as the conference structure F ∪ {{i} | i ∈ N }, obtained from F by adding all singleton coalitions. 8 This does not hold for the class of union closed systems. When adding all singleton coalitions to a union closed system , the resulting collection of coalitions ∪ {{i} | i ∈ N } is not a union closed system anymore, since by condition 2 of Definition 1, the unique union closed system containing all singletons is the collection = 2 N .
"year": 2011,
"sha1": "884224820f9c5c307602b0714011c83ab111a39f",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00199-010-0530-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "884224820f9c5c307602b0714011c83ab111a39f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
254556393 | pes2o/s2orc | v3-fos-license | Building brand meaning in social entrepreneurship organizations: the social impact brand model
In the face of numerous complex challenges at the ecological, economic, and social levels, Social Entrepreneurship Organizations (SEOs) offer an approach that is both solution-oriented and future-oriented by combining profitability and purpose. However, the achievement of social goals is closely linked to the ability to operate successfully in competitive environments, in which differentiation strategies, in particular the creation of strong and authentic brands, are vital to survival. Although the new paradigm of brand management, the so-called co-creative paradigm, has been extensively researched in recent decades both in the for-profit and non-profit contexts, there is still scarce empirical research addressing the field of SEOs. To exploit the potential that the co-creation paradigm offers for SEOs, our paper introduces a social impact brand model (SIBM), which sheds new light on the design process of social entrepreneurial brand meaning. The findings identify key drivers in creating SEO brands by focusing on a dual-brand core that consists of an impact mission orientation and an entrepreneurial orientation, internal branding activities, the founder's personal brand, and relevant brand (co-)creators. By aligning their brand management activities with the SIBM, SEOs can create brands that have authentic and stable brand meanings while managing stakeholder groups' various expectations.
Introduction
Increasing sensitivity to social problems has set in motion transformation processes at both the societal and the organizational levels. Companies are increasingly focusing on corporate social responsibility (e.g., Milne and Gray 2013;O'Connor and Gronewold 2013;Velte 2021), and the number of organizations dedicated exclusively to solving social problems has also risen sharply (Schofer and Longhofer 2020). However, both streams have their limitations as they either seek to maximize profits (if CSR follows the primacy of the shareholders; Freeman and Liedtka 1991), or to maximize social value. Since Social Entrepreneurship Organizations (SEOs) "bring together logics from different, and often conflicting, fields into a singular organizational form" (Huybrechts et al. 2020, p.3), they offer a route to accomplishing the social mission and gathering financial sustainability simultaneously. As a hybrid concept (Billis 2010), SEOs unify aspects of various categories of organizations (Huybrechts et al. 2020). As a result, the merging of various entrepreneurial worlds leads to an interplay of several market strategies, starting from a range of yet interwoven strategic orientations in SEOs; these range from a social orientation (Martin and Osberg 2007;Cho 2006) to an entrepreneurial orientation (Zahra et al. 2009;Yunus 2008;Kraus et al. 2014) and market orientation (Huybrechts and Nicholls 2012) to approaches for differentiation from the competition, such as brand orientation (Urde 1999;Lückenbach et al. 2019). Consequently, the achievement of social goals is linked to the ability to operate successfully in a competitive environment (Davis et al. 1991;Weerawardena and Mort 2006), in which approaches to achieve competitive advantage becomes relevant. One effective way to gain competitive advantage through differentiation, frequently mentioned in the marketing literature and regarded as a universal approach that is equally relevant to any organization (Napoli 2006), is the concept of branding. Aiming to generate value for different stakeholder groups, e.g., beneficiaries, customers, and investors, SEOs face a unique set of marketing challenges (Roundy 2017a) that are manifested in the diverging expectations of various stakeholders and requires both a market (Zhao and Lounsbury 2016) and a community logic (Santos 2012). Therefore, it is necessary to build and balance authentic brand meanings that reflect both the impact mission and the economic skills necessary to achieve long-term social goals.
The German social enterprise Einhorn (https://einhorn.my/) is an illustrative example of an SEO. It reflects very well the struggles of a social entrepreneur caught in the tensions between various institutional logics. Through his company Einhorn, the founder Waldemar Zeiler offers vegan and sustainable condoms as well as female sanitary products. The company invests 50% of its profits in social and sustainable projects. When the organization launched a crowdfunding campaign in the Berlin Olympic Stadium in 2019 for a good cause, it earned harsh criticism in the press and social media because the social startup is also associated with profit making (Heuberger 2019). Prominent satirists also voiced criticism of the general conditions of this event and at the same time questioned the hybrid business model. As the example of Einhorn illustrates, SEO brands are under constant and special scrutiny from stakeholder groups. Since trust plays an overriding role in these stakeholder relationships, branding offers a helpful approach for SEOs to build legitimacy (Liston-Heyes and Liu 2010;Napoli 2006) and to present a balanced overall picture of SEOs' opposing identities that is also flexible enough to connect with all stakeholders and to build trust. Thus, a holistic branding approach that combines the creation of a brand with monitoring strategies is the key to success.
The classical branding literature has experienced a dramatic shift in recent decades. Previously, brands were considered to have no strategic value; as Urde (1999) argues: "For a long time, the brand has been treated in an off-hand fashion as a part of the product" (p.119). However, based on the growing importance of service brands (Berry 2000;McDonald et al. 2001) as well as corporate brands (Balmer and Gray 2003;Hatch and Schultz 2010), the traditional view of brand management has changed. Researchers termed this "evolution of corporate brand management from an organization-centric view based on control to one rooted in a participative co-created perspective" (Iglesias and Ind 2020, p.710). This new paradigm of brand management, the so-called co-creative paradigm, focuses on brands as a result of social processes and claims that brands and their meaning are not solely created from within the company but co-created by multiple stakeholders. This shift of perspective has stimulated a large and growing body of literature at the intersection of branding and non-profit organizations and encouraged various research projects in this context (Laidler-Kylander and Simonin 2009;Naidoo and Abratt 2018;Boenigk and Becker 2016;Juntunen et al. 2013;Vallaster and Wallpach 2018). Although branding has become increasingly critical in recent non-profit literature that identifies stakeholders as active agents of co-creation (Vallaster and Wallpach 2018), it is an area in which SEOs and their particular set of marketing challenges (Roundy 2017b) have received too little attention from researchers focussing on brand co-creation. Most importantly, there is a lack of research that integrates the multiple stakeholder perspectives and complex social processes in creating strong SEO brands. To best exploit the potential that the co-creation paradigm offers to SEOs, it requires a specific and holistic branding approach that addresses, amongst other factors, the following five issues: (1) How influential are fresh brand management insights (e.g., the co-creative perspective, stakeholder orientation, and role of the social entrepreneur) for brand building in SEOs? (2) How can SEOs exploit these new developments to their advantage? (3) What would a brand management model designed explicitly for SEOs look like? (4) Which components are essential? (5) How can these components be best arranged?
Addressing the lack of insights into brand co-creation in the context of SEOs, our paper is believed to be the first to introduce a social impact brand model (SIBM) that sheds new light on the design process of social entrepreneurial brand meaning. Our study's findings identify key drivers in creating an SEO's brand meaning by focusing on the organization's impact mission orientation, internal branding activities, the founder's brand, and relevant brand (co-)creators. The proposed SIBM implies that the starting point for all branding activities is a dual-brand core that consists of an impact mission orientation and an entrepreneurial orientation. It directly influences the SEO's culture, brand behavior, and communication activities. A unique role is played by the social entrepreneur, whose personal character traits, particularly his or her "personal drive," are associated with how stakeholders perceive the organization as a whole. Effective connectivity with diverse stakeholder groups (e.g., donors, beneficiaries, politicians, or brand communities) is crucial since SEOs are challenged by resource constraints and, therefore, rely on collaborations with external stakeholders that actively influence the creation of brand meaning. In this context, SEOs must credibly demonstrate that they can combine social and business objectives by synchronizing their brand's paradoxical character traits within interaction processes with stakeholders.
Literature review
The concept of social entrepreneurship organizations
At least since the rise of "ethical consumerism" (Carrigan and Attalla 2001;Shaw and Shiu 2002), the concept of social entrepreneurship has been recognized as an object worthy of investigation in research, practice, and policy (Kraus et al. 2014;Gupta et al. 2020;Gandhi and Raina 2018). However, despite the increasing relevance of SEOs and the definitional debate that accompanies it, there seems to be some confusion about what a social entrepreneur is and does (Dacin et al. 2010). This lack of a common definition raises questions regarding which social or profit-making activities fall within the spectrum of social entrepreneurship (Abu-Saifan 2012). This is mainly due to the hybrid nature of SEOs "that bring together logics from different, and often conflicting, fields into a singular organizational form" (Huybrechts et al. 2020, p.3). However, some key perspectives seem to stand out in the social entrepreneurship literature.
The first perspective refers to the striving for both social and financial outcomes. In this context, Cho (2006, p.36) states that social entrepreneurship is "a set of institutional practices combining the pursuit of financial objectives with the pursuit and promotion of substantive and terminal values". This understanding was further sharpened by Di Domenico et al. (2010, p. 3), who merge the concepts of social mission and financial viability in SEOs. Accordingly, SEOs are "organisations that seek to attain a particular social objective or set of objectives through the sale of products and/or services, and in doing so aim to achieve financial sustainability independent of government and other donors". The second perspective focusses on the existence of an innovative spirit (Yunus 2008;Zahra et al. 2009) or an entrepreneurial orientation (Kraus et al. 2017). According to Yunus (2008, p.32), "any innovative initiative to help people may be described as social entrepreneurship. The initiative may be economic or non-economic, for-profit or not-for-profit". The definitional approach by Zahra et al. (2009) also emphasizes the degree of innovation as the core element and starting point of any organizational activity. In line with the third perspective, several authors also emphasize that social entrepreneurs distribute their socially innovative models via market-oriented action to reach broader and more sustainable outcomes (e.g., Huybrechts and Nicholls 2012). Following on from this, more recent literature investigates various marketing strategies in the SEO context. Above all, the concept of market orientation has been empirically investigated (Glaveli and Geormas 2018;Lückenbach et al. 2019), but the more recent SEO literature also provides conceptual and empirical insights into the field of brand orientation (Schmidt and Baumgarth 2015;Lückenbach et al. 2019). The present study focuses on the first perspective and understands SEOs, following Di Domenico et al. (2010), as organizations that pursue a social mission and strive for financial independence in the pursuit of that mission. Therefore, this study excludes, for example, non-profit organizations from consideration.
In summary, the above-mentioned definitions and approaches underpin the hybridity and strategic diversity in SEOs by placing a central focus on four main strategic alignments: sociality, innovation, market relatedness, and brand focus. In unifying aspects of various institutional logics (Huybrechts et al. 2020), SEOs must address the needs of various stakeholders, which may be in conflict (e.g., beneficiaries, consumers, investors, government agencies). In doing so, SEOs face the challenge of meeting expectations from the private, social, and public sectors alike without losing credibility. Furthermore, this creates a multitude of contact points with many actors in which the co-creation of brand meaning is negotiated interactively.
Distinguishing features between for-profits and SEOs
Substantial differences exist between SEOs and for-profits, making it unsurprising that SEOs may build brand meaning differently than in for-profit organizations. Trivedi and Stokols (2011) elaborated on differences in three areas: the purposes for their existence, the role of the entrepreneur, and the essential outcomes of the venture. The following section briefly discusses each of these three differences.
SEOs aim to address long-standing unsolved social problems and bring about a positive social change (Mair and Martí 2006). Therefore, the pre-existence of a social problem is the defining feature of SEOs and the primary reason for their existence. On the other hand, businesses look for opportunities to create and satisfy unfulfilled market needs. Whether the market demand is temporally fixed is irrelevant. Crucially, there is a growing market for such needs. However, for social entrepreneurs, this is not decisive. The mere existence of social needs or market failure is sufficient for pursuing social goals (Austin et al. 2006).
The social entrepreneur can be identified as a further distinguishing feature. According to previous scientific findings, he or she exhibits characteristics such as a strong ethical orientation, a high degree of social focus, ambitiousness, and a high capacity for continuous adaptation and creativity (Alter 2006). Although some of these characteristics are also associated with the corporate entrepreneur (Sharir and Lerner 2006), a social entrepreneur uses them differently.
For example, social entrepreneurs use their entrepreneurial skills to create positive social change, whereas corporate entrepreneurs invest their entrepreneurial resources in addressing problems that make more "economic sense" (Trivedi and Stokols 2011). Despite that, well-known examples of corporate entrepreneurs, such as Patagonia (https://eu.patagonia.com/), show that profit maximization and meeting social goals are compatible.
Given the outcomes, it is argued that the two kinds of organizations create different forms of social value. For for-profits, social value creation is not the primary goal, whereas for SEOs, it is the reason for their existence. Businesses may create social value indirectly, for example, by creating jobs, but these are indirect means of maximizing economic value. SEOs, on the other hand, create social value both within and beyond the organizational boundary by fostering collaboration, knowledge, and social networks as opposed to competition with other organizations (Trivedi and Stokols 2011). The field of activity of SEOs is characterized by a multitude of stakeholder interfaces. Reciprocal relationships exist in which SEOs depend on the intrinsic motivation of the collaborators (e.g., funding, patronage, advisory board activities). Thereby, credibility is a critical success factor for social entrepreneurs as it helps to maximize the commitment of relevant stakeholders to the collective social goal (Waddock and Post 1991).
Supplementing Trivedi and Stokols' remarks (Trivedi and Stokols 2011), it should be mentioned as a further distinguishing feature from for-profit organizations that SEOs follow varying and sometimes competing institutional logics (the social-welfare logic, the commercial logic, and the public-sector logic) (Pache and Chodhury 2012), which implies a huge variety of stakeholder groups with widely varying expectations of the organization. Beyond the interaction with beneficiaries and investors, a further critical group of stakeholders, namely consumers, is also relevant (Roundy 2017b). In B-to-C for-profits, the buyer and the user of a product and/or service are often the same, whereas in many SEOs, there is a disconnect between the purchaser and the beneficiary (the user of the product and/or service). Finally, the government and government agencies, which according to scientific studies mainly affect younger SEOs, also play an essential role in funding (Bacq et al. 2013). Consequently, SEOs face the challenge of meeting the expectations of the private, social, and public sectors without losing credibility.
The sum or combination of these differences related to the purposes for their existence, the role of the entrepreneur, the essential outcomes, and the complex and challenging stakeholder system suggest that SEOs typically build their brands differently than for-profits.
Branding in social entrepreneurship organizations
Despite the increasing awareness of branding's relevance for social organizations (Sepulcri et al. 2020), the literature is still in its early stages. In the non-profit context, there are isolated studies that investigate the process of brand value (co-)creation (Laidler-Kylander and Simonin 2009) and the measurement of brand equity (Naidoo and Abratt 2018). Following the dynamic stakeholder-focused brand era, most of the published studies adopt a stakeholder perspective (Boenigk and Becker 2016; Juntunen et al. 2013) and highlight the relevance of social processes in creating brand meaning. Since social organizations are located in various sectors of society, they have a broad spectrum of stakeholders and brand audiences. High levels of trust are therefore necessary (Kearns 2014) because such organizations primarily rely on legitimacy and resources from stakeholders, a fact that makes a stakeholder-based perspective relevant in this context.
The main research stream in the emerging field of social branding focuses on non-profit brand equity and investigates how this construct can be measured. One approach was suggested by Faircloth (2005), highlighting the importance of brand personality, brand image, and brand awareness. These findings are consistent with a further study by Juntunen et al. (2013) who, in particular, identified brand awareness as an elementary dimension of non-profit brand equity. In line with the arguments presented by Laidler-Kylander and Simonin (2009) that there is a need to distinguish between non-profit and for-profit brand building, Boenigk and Becker (2016) included communication- and relationship-oriented dimensions (brand trust and brand commitment) in their measurement model. A more recent study by Naidoo and Abratt (2018) also questions whether the (for-profit) dimensions of the brand equity construct can be fully transferred to the non-profit sector. The authors indicate that "there are multiple and significantly different ways of viewing the value of a social brand" (Naidoo and Abratt 2018, p. 11). In conclusion, it can be argued that conventional brand equity models cannot be transferred to the social sector without adaptation.
So far, little attention has been given to the process of building social brands. Only Laidler-Kylander and Simonin (2009) have developed a model that explores brand equity drivers in the non-profit sector. This model proposes that four key variables are essential sources of brand equity in such organizations: consistency, focus, trust, and partnerships. The authors also highlight the importance of internal branding and recommend recognizing and embracing the brand's internal role and encouraging internal brand ambassadors. These ambassadors are instrumental in promoting an understanding of the brand and ensuring that its internal and external perceptions align (Laidler-Kylander and Simonin 2009).
Considering the idea of brand orientation (Urde 1999) as well as the identity-based approach (Burmann et al. 2009) to brand management that includes employees as an important internal source of brand equity, a more strategic "inside-out" perspective on brands can also help social organizations to create and protect brand meaning. This is primarily because branding is equally relevant to any type of organization and can lead to notable improvements in performance (Napoli 2006). Within a case study approach, Schmidt and Baumgarth (2015) related the concept of brand orientation to the context of SEOs. Their findings show that brand orientation, including a cultural and a behavioral layer, is a relevant strategic orientation for at least some successful SEOs. This was reflected, for instance, not only in the fact that their management places great value on brand management but also in the idea of "living the brand" through and by all members of the organization. According to the authors, there is, in particular, a great need for research into how brand orientation affects organizational outcomes, such as brand performance.
Recent studies further investigate the co-creation perspective in non-profit organizations and propose a model of brand strategy co-creation that synthesizes the social and contextual dynamics characterizing brand strategy development (Vallaster and Wallpach 2018). In this context, Vallaster and Wallpach (2018) draw attention to the dynamic interplay of stability and adaptation shaped by individual, organizational, and market contexts in brand strategy co-creation. A study by Waldner (2020) sets the social entrepreneur "in the center of attention" and investigates how the presentation of a social entrepreneur's personality influences an organization's reputation. The findings show that social enterprises enjoy better stakeholder perceptions if the social entrepreneur's presentation focuses on society-oriented rather than business-oriented character traits.
In summary, the literature on brand meaning in social contexts remains in its infancy. Most of the existing studies focus on branding only in non-profit organizations. Besides, most of the current studies provide a framework for measuring non-profit brand equity; however, not much attention has been given to the creation process of social brands. This is especially true for SEOs. Given their hybridity, SEOs must address the needs of various stakeholders and face the challenge of meeting expectations from the private, social, and public sectors. So far, no study has been published that explores how brand meaning is created in these hybrid models. The existing brand models in the non-profit and for-profit sectors do not reflect the organizational complexity of SEOs and the specific environments in which they operate.
The co-creative brand paradigm
The branding literature has developed enormously in recent decades and new standards have been established. The traditional product- and firm-centric perspective (Aaker 1996) saw consumers as simply passive recipients of brand meaning (Prahalad and Ramaswamy 2004). However, based on the growing importance of service brands (Berry 2000; McDonald et al. 2001) as well as corporate brands (Balmer and Gray 2003; Hatch and Schultz 2010), the traditional view of brand management has changed. This development was fueled by the emergence of online communities and social media and challenged traditional corporate brand management approaches (Gyrd-Jones et al. 2013). The emerging branding perspective focuses on brand meaning as the result of social processes and argues that brand meaning, in our increasingly digital and connected world, is co-created by multiple stakeholders (Jones 2005; Merz et al. 2009; Iglesias et al. 2013; Ind and Schmidt 2019). According to Iglesias and Ind (2013), the creation of brand value occurs primarily in the "conversational space" (p. 677) between the consumers and the organization, through frontline employees and brand interfaces as well as in particular communities. The authors also highlight the relevance of external stakeholders such as suppliers, distributors, business partners, shareholders, journalists, and brand communities in brand meaning co-creation. In this vein, brand meaning is created both within the firm and with other "meaning makers" that are either favorably disposed to the brand or hostile to it (MacInnis and Park 2015). Therefore, in the hyper-connected digital environment, the process of brand meaning creation incorporates stakeholders' feedback, proposals, and actions (Kristal et al. 2020). That means firms have to accept a loss of control in the brand meaning creation process. A participative, co-created perspective in which multiple stakeholders help build and enrich the brand (Iglesias and Ind 2020) is crucial.
In this context, considerable importance is attached to brand communities (e.g., social media) because of their impact on consumers' perceptions (Muniz and O'Guinn 2001). On the one hand, brand communities can be ideal breeding grounds in which individuals establish relations with each other and with the brand to co-create its meaning (Muniz and O'Guinn 2001; Cova and Pace 2006; Dessart et al. 2015). On the other hand, brand communities also have the power to co-destroy brand meanings, for instance, in so-called anti-brand communities (Cova and White 2010; Dessart et al. 2016). Such practices in brand meaning creation are therefore associated with risk (Fournier and Alvarez 2013). Considering this loss of control (Muniz and O'Guinn 2001), brand meaning is informed by a highly complex range of influences, some of which can be controlled more than others, while some can only be observed and influenced (Jevons et al. 2005).
The new co-creative brand paradigm has also influenced non-profit organizations and stimulated various research projects in this context (Laidler-Kylander and Simonin 2009; Naidoo and Abratt 2018; Boenigk and Becker 2016; Juntunen et al. 2013; Vallaster and Wallpach 2018). Branding has become more and more prominent in the recent non-profit literature, which identifies stakeholders as active co-creation agents (Vallaster and Wallpach 2018). Although corporate branding has been extensively researched in recent decades in both the for-profit and non-profit contexts, empirical research in the field of SEOs is still scarce. To the best of our knowledge, there remains a lack of approaches for SEOs to tap into the enormous potential of the co-creative brand paradigm. Most importantly, there is a lack of research that integrates the multiple stakeholder perspectives and complex social processes in creating high levels of brand meaning in SEOs. In this context, brand meaning reflects internal and external stakeholders' perceptions of a brand (Vallaster and Wallpach 2013). On the one hand, it integrates brand identity as the internal perspective of the brand (Balmer and Greyser 2006); on the other hand, it incorporates brand image and brand reputation, which reflect the views of the brand's external audience (Black and Veloutsou 2017).
Methodology
Given the exploratory nature of our research questions, we applied a qualitative research approach using semi-standardized expert interviews to gain a deep understanding of the elements of the brand-building process and their possible interrelationships in SEOs. This method has proven suitable, particularly for research in exploratory stages (Bogner 2009). The approach has also been applied beneficially in the branding literature when no or only scant information was available due to the innovative nature of the topic (e.g., Iglesias et al. 2013; Naidoo and Abratt 2018). To gain theoretical knowledge about the concepts in the area of study and to develop the interview guideline, we focused on the brand management literature in the for-profit and non-profit contexts that in particular discusses the sources of brand meaning (co-)creation (Brodie et al. 2006; Keller 2008; Malhotra et al. 2015; Laidler-Kylander and Simonin 2009; Merz et al. 2009; Iglesias et al. 2013). Based on this prior knowledge, we deductively developed the interview guideline, trying to uncover the interviewees' views on the evolving role of social entrepreneurial brands and on key actors, actions, and interactions in social entrepreneurial brand meaning (co-)creation.
To adequately address our research questions, we selected seven SEOs from multiple sectors (B2C, B2B, and services).
We identified the SEOs through extensive online research and used academic and private networks to recruit persons with the necessary knowledge for our study. For this purpose, we contacted relevant German networks and impact hubs in the social entrepreneur scene. SEOs that could be counted as best-practice cases with a high level of awareness and a strong brand presence were selected. The SEOs surveyed were evenly distributed throughout Germany. Our participants were members of relevant social entrepreneur networks. Two were Ashoka Fellows and thus acted as pioneers and role models who successfully initiate and drive social innovations. Ashoka Fellows are selected social entrepreneurs supported and mentored by Ashoka and its global networks to maximize their social impact. One SEO was recognized as a national award winner in the category of social engagement in 2018.
To gain a broader view of the branding of the SEOs, secondary data were analyzed. For this purpose, websites, newspapers, and social media channels were analyzed and triangulated with the results from the interviews with the SEOs. To further triangulate and validate our findings from a second point of view, we conducted four interviews with social marketers with in-depth knowledge of branding in the context of SEOs. All of the branding experts interviewed had many years of extensive experience in advising SEOs on brand development. We recruited the branding experts based on recommendations from the German Social Entrepreneurship Network and used personal networks to ensure that relevant experts could be integrated into the study. In addition, we subjected one of the organizations surveyed to a more in-depth, multi-perspective review to further triangulate the findings. To integrate a 360-degree view into the study, we interviewed two employees, an advisory board member, an investor/advisory board member, and a cooperation partner in addition to the founder. The first field phase, in which we interviewed seven SEOs and two branding experts, was conducted between October 2018 and January 2019. The second field phase, in which we interviewed two branding experts and five stakeholders of a selected SEO, was conducted between October and December 2021 (see Fig. 1).
Before the interviews, the experts were provided with background on the research and its purpose via email. The interviews (see Table 1) were held via telephone and online video conferences until saturation was achieved (Creswell 2013). The interviews were audio-taped and transcribed. We analyzed and interpreted our data using qualitative content analysis (Mayring and Fenzl 2014). Based on a qualitative data management and analysis program (ATLAS.ti 8), our data were inductively analyzed and interpreted line-by-line using a coding process to identify concepts and properties. These concepts were then grouped into higher-order concepts (categories and subcategories) (see Fig. 2). After the coding process had been completed, the categories were abductively integrated into our conceptual model. Abduction is intended to help social researchers make new discoveries in a logically and methodologically ordered way (Reichertz 2007). Finally, the resulting model was further improved by reviewing it against the literature and discussing it at a scientific conference.
[Table 2 near here. It reports the main categories with the number of associated quotations and exemplary interview statements: Role of branding (26), Mission orientation (34), Entrepreneurial orientation (22), Brand behavior (49), Brand culture (19), Social entrepreneur (46), Brand history (19), Brand design (30), Marketing approach (5), Online communities (24), Brand ambassadors (38), Word of mouth (16), Stakeholder orientation (50), Cross-sectional dimensions (49), Brand monitoring (50).]
Following established procedures developed for the inductive category formation technique of qualitative content analysis (Mayring and Fenzl 2014), we integrated two independent coders into the analysis. To assess inter-rater reliability, we gave all citations and the code list to a researcher experienced in qualitative research. After a brief introduction to our study and an explanation of the intercoding analysis procedure, we asked him to assign the codes to the citations. This involved 72 codes and 477 citations. As a result, he could correctly allocate 53.9% of all the citations to one of the codes, which can be regarded as a very good result given the high number of codes and the fact that some citations were associated with multiple codes. Both the interviews and the intercoder process were conducted in German. After the coding process was completed, the authors translated the codes and text passages into English. To assess the translation quality and ensure that the target text had the same meaning as the source, the translated text passages were additionally checked for correctness by a native English speaker. Table 2 illustrates exemplary statements concerning the main categories.
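To make the reported agreement figure concrete, the following minimal sketch shows how a simple percent agreement between the original coding and the second coder's assignment could be computed. It is purely illustrative and not the authors' actual procedure: the function name, the data layout, and the assumed number of matching citations are our own; only the totals (477 citations, 72 codes) and the 53.9% figure come from the text.

```python
# Illustrative sketch only: simple percent agreement between two code
# assignments over the same set of citations.

def percent_agreement(original_codes, recoded_codes):
    """Share of citations to which both coders assigned the same code."""
    assert len(original_codes) == len(recoded_codes)
    matches = sum(1 for a, b in zip(original_codes, recoded_codes) if a == b)
    return matches / len(original_codes)

# Hypothetical check of the reported magnitude: if roughly 257 of the
# 477 citations were re-assigned to their original code, agreement is ~53.9%.
print(f"{257 / 477:.1%}")  # -> 53.9%
```

Note that simple percent agreement does not correct for chance agreement (as, e.g., Cohen's kappa would); with 72 possible codes, however, chance agreement is very low, which supports the authors' interpretation of the result.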
Findings
The goal of this study is threefold: first, to address the question of what value branding has in the context of SEOs; second, to identify components and processes of how brand meaning is co-created in SEOs; and third, to determine an optimal arrangement of these components to create high levels of brand meaning in SEOs. From the analysis and interpretation of the in-depth interviews with social entrepreneurs, branding experts in the field of SEOs, and various stakeholder groups, a brand model for SEOs emerges: the SIBM (see Fig. 3). Concerning our research questions, the following sections discuss the relevance of branding in the field of SEOs and then present the theoretical framework and its specific components as they emerged from the fieldwork.
Relevance of branding in SEOs
Concerning our first research question, our findings show that branding is given a very high priority by all respondents: SEOs, social marketing experts, and stakeholders. Given the high intensity of competition, successful branding enables differentiation from competitors and develops unique selling propositions that also constitute financial assets.
"Critical, very high priority. Because we can clearly distinguish ourselves from similarly positioned competitors through the brand, we have created." (SEO4) "There is an extremely high level of competition, and of course, we are required to be on safe ground from a legal point of view. And this is where trademark protection helps us. It has an incredibly high significance for our survival and differentiation from other competitors." (SEO3)
Equally, branding can signal values that combine financial viability with the achievement of the social mission, the raison d'être of an SEO. However, branding has a unique feature here, namely the authentic representation of a balanced overall picture of an SEO's opposing identities.
"We have our independence in the commercial and the non-commercial environment. We are a hybrid and are perceived as such, but that doesn't create any conflicts. So far, I haven't experienced any conflicts where I've been portrayed in the commercial environment as […] who can't get anything done. I haven't been seen as a hardcore capitalist in the social environment either. In any case, connectivity is enormously essential for all stakeholders." (SEO3)
Our fieldwork has further shown that these two poles are essential to achieving acceptance among various stakeholders, especially potential investors. Here, self-confident brand management can help to send important messages in the internal relationship with the investor so that no power imbalance arises in this cooperation.
"And at this point, it is essential in brand management that they signal in the first moment with their social vision to me, the sponsor, the investor, the financier, the donor, we can do something here, and we offer you to fulfill your mission with us. [
Identity-driven components of brand meaning creation
From the empirical material, it appears that building a strong brand requires focusing on the impact orientation of all activities. The fieldwork shows that two simultaneous strategic approaches are relevant to conveying an authentic image to all stakeholder groups: entrepreneurial orientation and social mission orientation. To leverage branding's potential for SEOs, this interplay needs to be firmly anchored in the SEO's brand identity. As a result, the SIBM starts with a dual brand core (entrepreneurial orientation and social mission orientation) (see Fig. 3). The interviews also revealed that it is crucial to create a shared value base that all employees, and above all the founder, exemplify, and that is reinforced and made visible through narratives and various advertising materials. From our fieldwork, five particular identity-driven components emerged: brand culture, brand behavior, brand design, brand narratives, and the founder's personal brand.
Brand culture is a central component of a brand orientation (Urde 1999) that may also be defined as a specific type of corporate culture or a company's particular mindset (Urde et al. 2013). According to our analysis, the brand culture in SEOs covers values defined as deeply embedded but largely unconscious behaviors. As background variables they are directly associated with the impact mission, as exemplified by the founder, and collaboratively created with all team members. SEOs, which usually consist of a small team in the start-up phase and often beyond, have a unique role in forming a common identity. Therefore, all team members must generate a shared understanding of what the SEO stands for. The culture is reflected in guidelines and rules to prescribe behavior and values. These norms, expressed as the code of conduct, represent the explicit and implicit behavior rules.
"Branding in this sense is essential because it forces us internally to think about particular issues. What do we stand for? How do we want to be perceived? Who do we want to reach? What do I care about, and what are we doing here?" (Consultant 1)
Brand behavior is also a central part of brand orientation (Urde 1999) and reflects the internal anchorage of the brand identity (Urde et al. 2013). From the empirical material, it appears that SEOs can develop brand-oriented behaviors, such as regular internal meetings to analyze and discuss the brand's status and development or frequent communication to increase brand awareness and improve the SEO's image. From our data, two categories emerged: brand activities and brand analysis. Brand activities are closely related to the concept of "living the brand" (Ind 2007), which is also essential for SEOs, both internally and externally. Internally, brand values can be implicitly exemplified and passed on during team events or meetings. Identification with the organization's original values, which at their core subsume the mission statement and the entrepreneurial mindset, should also be perceptible in external relations with the stakeholder groups.
A further central part of brand behaviors is the concept of brand analysis. Our fieldwork showed that it is an essential instrument for developing one's brand and regularly checking whether the brand image is consistent with the brand identity. Through these reflection processes, both within the team and through the integration of feedback from the various stakeholders, an honest comparison of self-image and external image can occur, and a correction can be made in the event of a possible discrepancy.
"Internally, we have many discussions about branding and authenticity to the outside world, because many new people come in and represent the company to the outside world." (SEO1) According to our analysis, the founder plays a prominent role in SEOs. In many cases, he or she has been the driver of the organization's founding to address a social problem he or she has identified. Thus, he or she serves as a role model for his or her team members and significantly transports the organization's values, both internal and external. Since the founders are often the people who primarily appear externally and represent the organization in negotiations with various stakeholder groups, they implicitly influence its identity. Over time, a social entrepreneur builds a personal brand that spills over into the organization. An essential role in this context is played by the "personal drive," which is closely intertwined with the organization's actual mission orientation.
"So, I think with the brand, of course, I also became a brand; you also build a brand as a person, which is very closely related to the product in the end. […] But of course, you also build up a particular reputation. There is individual recognition. I believe that I now stand for specific values as a person, for someone who has dedicated himself to an idea for 30 years and has not been unsuccessful in advancing it." (SEO3)
The importance of the founder for the shaping of the brand was mentioned by all research groups in our sample and was strongly emphasized by the external stakeholders interviewed. Especially in the first talks about funding, the impact of the founder and the perceived credibility on the one hand, and the associated economic competence on the other hand, are decisive for economic participation. In addition, the founder's attributes are very strongly associated with the perception of the organization.
"In general, I would say that with the SEO, similar to others, but even a bit stronger, at least in the first years, of course, the personality of the founder is extremely important. [...] in other words, his own personal presence as a part of the brand, but also always the recommendation from the advisory board, which of course applies to develop a kind of organizational identity from it." (Investor/Advisory Board Member,SEO4) Brand design or brand interfaces includes all the many non-human interfaces through which consumers interact with a brand (Iglesias et al. 2013). Our fieldwork shows that the creation of brand meaning in SEOs also requires consistent management across several non-human interfaces through which consumers interact with a brand. Non-human interfaces enable SEOs to convey their organizations authentically and credibly to the outside world. Given a diverse set of stakeholders, this requires consistent management across all interfaces, including ethical principles in communication (e.g., no consideration of manipulative communication).
"The other side is that, in addition to this intention, there is also the point that if I take my work as a social entrepreneur seriously in communication, I have to do without a few methods that are standard in marketing, in online marketing, but are also rather manipulative […] And I believe that social entrepreneurs have to set themselves limits to a certain extent because they can't use every trick in the book to turn people into customers; they can't use every trick in the book. Perhaps they have to set themselves a few ethical rules at that point. That's quite a difference, I think." (Consultant 1) Brand narratives target the importance of rhetoric in branding. A narrative " [...] is the reflective product of looking back and making sense of stories constructed to make sense of life" (Flory and Iglesias 2010). From the material, it can be deduced that SEOs can use narratives about their founding stories to bolster credibility and seriousness about their social mission. They contribute to an identity as they underscore through narrative why the organization was founded, what social problem is being addressed, and underpin its credibility. As part of the founding story, the founder encounters a social problem. A kind of intuition takes place that brings the problem solution into focus. This persuasive power can be conveyed via narratives and strengthens the organization's relevant stakeholders' confidence to solve the social problem over the long term.
"So, when a company says, 'we would like to have you as a speaker', it is also a critical topic. They find it exciting and want to see how the idea came about and what has grown out of it. That is part of the lecture, and everyone always finds that particularly interesting, in addition to the topic in question. Many people are moved by the fact that someone has recognized a problem and starts somewhere. Many find the growth from this very exciting, and causes some to join in." (SEO7)
Brand interactions and brand meaning co-creation
The analysis of the empirical material shows that SEOs are involved in complex stakeholder networks. This results in a large number of human contact points. Our results emphasize the high importance of the design of these conversational spaces. At the level of brand interactions and brand meaning co-creation, the SIBM identifies brand ambassadors, stakeholder orientation, Word-of-Mouth and online communities as central drivers. However, according to our analysis, three transverse components are relevant in all these brand touchpoints and should be fundamentally observed in communication: transparency, consistency, and authenticity.
Consistency should be reflected in all visual marketing materials (e.g., logo, wordmark, consistent website). SEOs also need to communicate both transparently and authentically. A transparent presentation of organizational activities enables the various stakeholders to gain insight into the structures and processes. Authentic communication within all conversational spaces by all organization representatives is a central precondition for gaining trust and legitimacy.
"People just like to talk to and about people; that's one of my common sayings. If the social component or the social impact is part of my business model and is relevant for my customers, then, of course, we also want to know if he's serious or if it's just whitewashing and PR talk. The entrepreneur's role and person are critical if it plays such a role and has such significance." (Consultant 1)
Brand ambassadors can be classified as representatives of the organization. They act in the name of a brand (Schmidt and Baumgarth 2018). Our findings identify employees, cooperation partners, and especially the founder of SEOs as brand ambassadors who represent the brand values to the outside world. Within multiple interaction processes with diverse stakeholders, they represent their organization in negotiations about their brand's meaning. In this context, a strong stakeholder orientation is a central approach for SEOs. The fieldwork shows that social entrepreneurs in particular need strong relationship management skills. Since SEOs collaborate with diverse groups of stakeholders with diverging expectations, they must achieve connectivity with all stakeholders. This implies a precise synchronization of their brand's paradoxical character traits depending on the stakeholders' respective expectations. Our results further show that the network of existing cooperation partners is not only an essential factor for the resource mobilization of the SEO but also highly attractive for the stakeholders themselves because of the social capital within the stakeholder networks, the level of which in turn strongly depends on the networking ability of the founder. Accordingly, the network itself can be considered an influencing factor on the design of the brand.
"The Advisory Board is also so attractive; it is a brand in itself because there are so many exciting people on it from politics, science, the media world, and the foundation's purpose. There is a good tone, and it has always been approachable, friendly, polite. That starts from the beginning, so to speak, there is always such a tone in such a meeting, I'm probably anticipating some things, but that is the invisible mark, very, very essential." (Advisory Board Member,SEO4) "This ability to integrate and to create a sense of coownership, which unites us all, is a great gift. It's simply a pleasure to participate." (Investor/Advisory Board Member,SEO4) Social processes have a strong influence on creating brand meaning in SEOs. This is also reflected in the importance of Word-of-Mouth in this context. The various stakeholders pass on their accumulated positive and negative experiences with the organization in their networks. The recipients of these messages also spread the messages in their own networks. The information thus flows within and between networks of customers, beneficiaries, sponsors, and the public. Viral marketing plays a significant role, especially in social networks and online communities. The fieldwork shows that social networks have high strategic relevance for SEOs. The activities currently carried out also focus on posting articles, advertising events, or writing blog entries.
Since social processes strongly influence brand meaning in SEOs, the management of brand meaning in brand communities becomes relevant. Concerning the monitoring of branding activities, the fieldwork shows that the processes are not highly standardized; still, an awareness of this topic is discernible, and a consistent pattern of behavior can be derived. SEOs systematically observe all activities and the reactions to them on the internet and in online communities. This includes published articles, newspaper interviews, and posts on social networks. They further react to what has been observed, particularly to queries and critical remarks.
"We also respond to queries and criticism that pops up from time to time when someone asks why it works like this and like that? And in the case of a donation vote, for example, the question 'Is everything working as it should?' Of course, we respond to that as well." (SEO2) Furthermore, SEOs stimulate feedback processes with stakeholders. Our fieldwork shows that, in this context, an open attitude towards criticism and actively asking for stakeholders' perceptions is the starting point for the convergence of self-image and external image, resulting in a compliant view of the brand. If any divergence between the communicated brand values and the perceived associations is identified, this should be discussed critically within the organization, a recalibration considered and suitable measures implemented.
"If there is a spark of truth in it, you should take it up; you should bring it up for discussion internally and see how you can get back to your actual vision." (Consultant 2) The architecture of the SIBM Based on the empirical material and the concepts already described at the levels of brand core, brand identity, and brand interaction, the SIBM was derived abductively (see Fig. 3). The model consists of three levels. The first level refers to the brand core. According to the SIBM, brand meaning is created from an "inside-out" perspective. The interweaving of a social mission orientation with an entrepreneurial orientation creates an authentic image among the stakeholders. The prerequisite is a deep anchoring in the hybrid organization to allow the formation of brand identity. Therefore, the SIBM integrates a "dual-brand core" that act as the starting point for all branding activities.
At the second level of the SIBM, the brand identity is influenced by a brand orientation that consists of a cultural and a behavioral layer. Other brand identity components are brand narratives, the social entrepreneur's personal brand, and the brand design. The social entrepreneur plays a vital role in building brand identity in SEOs. As a manager, he or she has a significant influence on the organizational culture, shaping the employees' behavior in following corporate values. He or she is also often intertwined with the founding story, which leads to his or her personally being part of narratives surrounding the organization. In addition, his or her personal brand is strongly associated with the corporate brand.
The third level of the SIBM considers all interaction processes with human representatives of the organization. Multiple personal interactions with organization members determine brand value co-creation to a considerable extent. Employees, and especially the founder, represent the brand values to the outside world. Acting as brand ambassadors, they represent their organization in negotiations about their brand's meaning. Since SEOs collaborate with diverse groups of stakeholders who have diverging expectations, they must generate connectivity with all stakeholders and synchronize their brand's opposing character traits depending on the stakeholders' expectations.
Furthermore, in SEOs, three conditions are relevant at all possible brand touchpoints: transparency, consistency, and authenticity. All three criteria should be observed consistently in communication. In addition to the consistency dimension, which should be reflected in all visual marketing materials (e.g., logo, wordmark, consistent website), SEOs need to communicate both transparently and authentically. A transparent presentation of organizational activities enables the various stakeholders to gain insight into the structures and processes. Transparency can be identified as a criterion for resolving the tensions in which SEOs operate, e.g., by communicating transparently how the financial resources are used to fulfill the social mission. As the third transverse component, authenticity should shine through all marketing activities. Given the dual-brand core, it is crucial to address the "Why" of each business decision authentically. Following Dammann et al. (2020, p. 1), we understand authenticity in this context as "the process of being in a congruous relationship with self, others, and relevant social norms." To promote the social mission of SEOs, communication should, on the one hand, show a genuine interest in addressing the problem; on the other hand, it should credibly demonstrate that the SEO is capable of addressing the problem sustainably through business competence.
In addition to the components and processes that are relevant to creating brand meaning in SEOs, our findings present a systematic approach to monitoring the brand's meaning in SEOs (see Fig. 4). The proposed process integrates four components: observe, react, act, and calibrate. Concerning the importance of social processes in creating brand value, SEOs can, as a first step, systematically observe all activities and the reactions to them (e.g., published articles, newspaper interviews, or posts on social media). The next step concerns the reaction to what has been observed and refers particularly to queries and critical remarks, which must be responded to. The third step correlates very strongly with the construct of brand analysis and can be interpreted as an actively initiated feedback process stimulated by the SEOs. An open attitude towards criticism and actively asking for stakeholders' perceptions is the starting point for the convergence of self-image and external image, resulting in a coherent view of the SEO's brand. As a fourth step, any divergence between the communicated brand values and the perceived associations should be critically discussed within the organization and a calibration considered.
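To make the four-step monitoring cycle tangible, the following minimal sketch models it as a simple workflow. It is purely illustrative and not part of the study: all class and function names, and the data structure for observations, are our own assumptions.

```python
# Purely illustrative sketch of the observe-react-act-calibrate cycle
# described above; names and data structures are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Observation:
    source: str        # e.g., "press article", "social media post"
    content: str
    is_critical: bool  # query or critical remark that requires a response

@dataclass
class BrandMonitor:
    communicated_values: set
    log: list = field(default_factory=list)

    def observe(self, obs: Observation) -> None:
        # Step 1: systematically record all activities and reactions.
        self.log.append(obs)

    def react(self) -> list:
        # Step 2: collect queries and critical remarks to respond to.
        return [o for o in self.log if o.is_critical]

    def act(self, stakeholder_perceptions: set) -> set:
        # Step 3: actively solicit stakeholder perceptions (brand analysis)
        # and identify divergences from the communicated brand values.
        return stakeholder_perceptions - self.communicated_values

    def calibrate(self, divergences: set) -> bool:
        # Step 4: flag whether an internal discussion and recalibration
        # of the brand values is needed.
        return len(divergences) > 0
```

In practice, these steps are of course organizational routines rather than code; the sketch is only meant to clarify the sequence of the four steps and the inputs each one works on.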
Discussion
To the extent that branding has been studied scientifically in the social entrepreneurship sub-disciplines, theoretical models that address the complexity and ambiguity of social and entrepreneurial action have been lacking until now. The scientific literature mainly provides comprehensive approaches and models concerning marketing and branding in for-profit businesses. An important insight in this context is that not only is the business responsible for the development of brand values; customers and other stakeholders also actively intervene in the process and help shape brand meaning, which in turn requires a strong stakeholder orientation (Vallaster and Wallpach 2013; Wallpach et al. 2017; Iglesias et al. 2013; Ind and Schmidt 2019). The scientific discourse in the field of non-profit branding also makes a strong reference to stakeholder orientation (Laidler-Kylander and Simonin 2009; Naidoo and Abratt 2018; Boenigk and Becker 2016; Juntunen et al. 2013; Vallaster and Wallpach 2018) but contains few studies about the co-creation of the brand (Vallaster and Wallpach 2018). The entrepreneurship literature is also concerned with drivers of branding activities (Abimbola and Vallaster 2007; Krake 2005; Yin Wong and Merrilees 2005; Spence and Hamzaoui Essoussi 2010; Vallaster and Kraus 2011). The emerging research considers personal branding and places the entrepreneur at the center of all business activities, thereby influencing brand development (Spence and Hamzaoui Essoussi 2010).
Since the definition and the purpose of an SEO brand differ from those of non-profit and for-profit brands, the conventional antecedents of brand meaning formation needed to be reviewed. To accomplish this adequately, seven successful SEOs, five stakeholders, and four branding experts in the field were interviewed. Although these groups had specific departure points on each construct, in most cases, their statements were along the same lines. As expected, it became clear during the analysis that the simple transfer of an existing brand model cannot do justice to the complexity of SEOs and the different expectations of the stakeholders towards the organization. Accordingly, the results show an interplay of constructs already known from various entrepreneurial contexts, while also adding new components.
Since the goal of this study was not only to clarify the relevance of branding and to identify relevant constructs in brand meaning creation in SEOs but also to determine an optimal arrangement of these components, we set out to build a brand model that was abductively derived. Our SIBM represents a unique holistic brand management approach, especially for SEOs. It brings together the findings of various research streams, taking into account valid qualitative data from social entrepreneurs and marketing specialists in the field. The SIBM represents an "inside-out" approach that allows SEOs to create and maintain brand meaning. The core element of an authentic brand identity is a "dual-brand core" that interweaves a social mission with entrepreneurial orientation. On the next level is the brand identity, which is significantly influenced by internal branding activities and the founder's drive. Finally, at the third level, the interaction with the brand takes place. Social entrepreneurs and all employees can transport a balanced image of the various identities in SEOs if they communicate and interact authentically, transparently, and consistently. An essential condition for success is creating connectivity among all stakeholders, which is crucial to appearing credible in the conversational spaces with all stakeholders.
The fieldwork shows that branding plays a highly relevant role in SEOs. However, the understanding of branding needs to be recalibrated for SEOs since impact-oriented branding is the focus of all organizational activities. The intention with which products and services are sold or financing projects are addressed differs fundamentally from for-profit contexts, in which needs are often generated among stakeholder groups that they do not actually have. This has implications on several levels that apply specifically to SEOs. First, brand ambassadors of SEOs, and especially the founder, are confronted with very high expectations imposed by the diverse stakeholder groups. They must always credibly demonstrate that they are seriously interested in addressing a social or societal problem and, at the same time, that they have the economic competence to implement a sustainable business model. The interviewees describe excessive expectations, especially with regard to the integrity and credibility of SEO employees and especially the founder. They are therefore under constant and critical observation by stakeholders, and the danger of a lasting loss of trust, for example due to unethical conduct, is always present. This distinguishes them from for-profits and should be considered when designing SEO brands. Second, the interviews underpinned the critical role of the social entrepreneur in building brand meaning in SEOs. All the research groups in our sample mentioned the founder's importance, and this was strongly emphasized by the external stakeholders interviewed. On the one hand, his or her enthusiasm and credibility are relevant to investors and cooperation partners; on the other hand, as a networker and "conductor" of the stakeholder network, he or she has the task of bringing various stakeholders to the table, enabling multiple synergies. The quality of the social capital residing in these networks depends mainly on the founder. In our case study, the network itself was identified as a brand. Third, since SEOs are involved in a complex stakeholder network, they have to deal with much more complex stakeholder management than for-profit organizations. In this context, informal contact points are particularly relevant, in which they have to manage the varying expectations imposed on the organization to establish connectivity with all stakeholder groups. Concerning internal branding, our results finally show that employees in SEOs are engaged out of intrinsic motivation: they act from conviction. For the design of internal branding in SEOs, this means creating sensitivity among all employees, especially for the relevance of economic goals.
Building on the multidimensionality of SEOs, the SIBM provides a concrete explanatory approach to how brand meaning can be created and maintained in this specific context. It goes beyond existing commercial and social brand models by providing a particular starting point for managing brand meaning, namely the intertwining of mission orientation with entrepreneurial orientation, which are fused to form the brand core of SEOs. Based on the 16 expert interviews conducted, we claim that this study provides initial insights into the emergence of SEO brands with a strong and authentic brand meaning. It contributes to an understanding of branding at the intersection of commercial and social organizations that takes into account the organization's brand identity and the different associations of stakeholders with the organization.
Compared to existing brand models, the presented SIBM also contributes to dealing with the control of brand meaning in SEOs. Provided that a strong brand is created, a systematic monitoring approach helps SEOs keep the meaning of the brand on track. The fieldwork has shown that SEOs can control their brand's meaning through a four-step procedure, although it should be stated that SEOs must accept a degree of loss of control over the brand. The proposed process integrates the components observe, react, act, and calibrate.
Implications for research and practice
Our study offers fresh insights into the creation of brand meaning in the context of SEOs that are highly relevant for brand management research and practice. From the academic perspective, we contribute to answering the call for studies on how brand management differs between purely commercial businesses and not-for-profit organizations (Golob et al. 2020). The innovative SIBM that we propose, based on our explorative study, contributes to research on social branding by presenting a holistic brand management approach tailored to the specific challenges that SEOs face. By focusing on the antecedents of creating brand meaning, it goes beyond existing models that deal mainly with measuring non-profit brand equity (Naidoo and Abratt 2018). At the same time, our approach goes hand-in-hand with existing research (Laidler-Kylander and Simonin 2009) that identifies consistency, focus, trust, and partnerships as key variables of brand equity. In this context, the importance of internal branding and the value of internal brand ambassadors are highlighted, ensuring that the brand's internal and external perceptions align (Laidler-Kylander and Simonin 2009). Our model also identifies these aspects as among the main drivers in creating brand value in SEOs. Concerning the "human brand interfaces," the social entrepreneur, whose relevance to an organization's reputation is supported by Waldner (2020), acts as a representative of the brand and influences its meaning with his or her personal characteristics. Therefore, personal branding has a special role here. Personal branding can be seen as the construction of a human brand "that can then be marketed as effectively as possible" (Shepherd 2005, p. 6) and that is highly influenced by the constructs of credibility and authenticity (Scheidt et al. 2020). This can be transferred to the social entrepreneur, who should be perceived as someone who cares about social impact but can also think and act commercially. This study also provides empirical evidence regarding brand value co-creation processes by giving first insights into how SEOs assert more control over stakeholders' dialogues in social media while simultaneously ensuring that their brands align with the values derived from their mission statements. Thereby, for SEOs, the definition of a mission statement is crucial since it represents a clear idea of what an organization wants to be and positively impacts economic performance (Berbegal-Mirabent et al. 2021).
In addition to the theoretical implications, this study offers recommendations for managers of SEOs. For founders and marketing managers of SEOs, our findings help them to fundamentally understand which drivers influence the development of a strong and authentic brand. One crucial point is that, to focus on the social mission and make it credible, they must have the entrepreneurial skills to achieve long-term social goals. SEOs must develop the ability to adequately address and manage the varying expectations of various stakeholders in digital and personal conversations and ensure connectivity at all touchpoints. In these contexts, they should credibly embody the social mission but also demonstrate business skills. The union of these two paradoxical identities should be considered when developing the brand identity. In all activities, brand managers in SEOs should keep in mind that, in addition to marketing tools such as brochures or websites, it is above all personal contacts and social processes that contribute to developing a strong SEO brand. Therefore, internal branding also plays an essential role because it enables the SEO's representatives to communicate the organization's values to the outside world in exchange processes with stakeholders and to enter into negotiation processes about the brand's meaning. However, SEOs' management must also accept that they can only control their brand's meaning to a limited extent: in our increasingly connected world, stakeholders actively co-create the development of brand meaning. Nevertheless, our study's results provide SEOs' management with a tool for moderating the co-creative brand management process.
Limitations and further research
The creation of the SIBM is the result of research based on in-depth interviews with social entrepreneurs, managers, and social marketing experts. Therefore, the general limitations of qualitative research-such as the lack of representativeness or possible interviewer bias-must be considered. Although our introduced brand model paves the way for understanding the creation of brand meaning in social entrepreneurship, it suffers the limitation of not having included the opinions of some important stakeholders, e.g., consumers, beneficiaries, and sponsors. Due to the qualitative nature of our research, the conclusions are not generalizable. Owing to this qualitative nature, the SIBM focuses on better understanding the process of brand meaning co-creation in SEOs without proposing testable hypotheses. Although this study provides initial insights into how brand meaning is co-created in SEOs, further research is needed to illuminate stakeholder groups' influence in the context of brand meaning creation. Furthermore, in addition to the process of building a strong brand in SEOs, great potential still exists for work that further explores the systematics of the brand monitoring process. SEO brands are a growing managerial reality, but scientific research in this field remains in its infancy. Therefore, empirical research-both qualitative and quantitative-is needed to provide other relevant explanatory approaches to the branding of SEOs. Based on our study results, we see a need for research in the following four areas. First, the presented SIBM is based on existing literature and qualitative research data. It is thus a first attempt to provide an explanatory approach to guide brand meaning management in SEOs. In the next step, the model should be tested, for example, within a case study. Second, the transferability of the model to all forms of SEOs should be verified. Here, one object of research could be the role of the social entrepreneur. Does his or her influence on the brand meaning apply to all SEOs, or does it depend on factors such as organizational size, industry, or dependence on social media? Given the importance of a founder's personal brand, a follow-up question for further studies is how branding can positively influence effective succession management. Here, studies would be helpful that focus on the depersonalization of the founder's brand and derive strategies for how the DNA shaped by the founder can be perpetuated in the organization via branding, even once the founder leaves the company.
Third, further empirical research should be conducted to specify how the stakeholder interaction process influences brand meaning in this specific context, for example, by examining the extent of different stakeholder groups' impact on brand meaning creation. Finally, the process of monitoring brand meaning should be further empirically researched, specified, and verified.
In conclusion, our research provides a manageable explanatory approach for how SEOs can create brands with authentic and stable brand meanings while managing stakeholder groups' varying expectations. By aligning their brand management activities with the social impact brand model, SEOs can be much more than "do-gooders": they can become strong and sustainable brands that make a real difference in the world.
Funding Open Access funding enabled and organized by Projekt DEAL. Jörg Henseler gratefully acknowledges financial support from FCT Fundação para a Ciência e a Tecnologia (Portugal), national funding through a research grant from the Information Management Research Center-MagIC/NOVA IMS (UIDB/04152/2020).
Conflict of interest
On behalf of all authors, the corresponding author states that there are no conflicts of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2022-12-12T16:08:34.902Z | 2022-12-10T00:00:00.000 | {
"year": 2022,
"sha1": "069f0c43f82aff929d958a098242251c0d7179d0",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1057/s41262-022-00299-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "678f5c8695235715d387ad7802ac2053578b2694",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
12209545 | pes2o/s2orc | v3-fos-license | Geographical distribution and molecular detection of Nosema ceranae from indigenous honey bees of Saudi Arabia
The aim of the study was to detect the infection level of honey bees with Nosema apis and/or Nosema ceranae using microscopic and molecular analysis of the indigenous honeybee race from eight geographical regions of Saudi Arabia. A detailed survey was conducted and fifty apiaries were chosen at random from these locations. The infection level was determined both by microscopy and by multiplex PCR, and the data were analyzed using bioinformatics tools and phylogenetic analysis. Results showed that N. ceranae was the only species infecting indigenous honeybee colonies in Saudi Arabia. As determined by microscopy, Nosema spores were found in 20.59% of the sampled colonies, while 58% of the samples evaluated by PCR were found to be positive for N. ceranae, with the highest prevalence in Al-Bahah, a tropical wet and dry climatic region, whereas low prevalence was found in the regions with a hot arid climate. Honeybees from all eight locations surveyed were positive for N. ceranae. This is the first report of the detection, contamination level and distribution pattern of N. ceranae in Saudi Arabia.
Introduction
Western honey bees (Apis mellifera) are a highly valued resource worldwide and are of great relevance for humans and the entire ecosystem, not only as honey and wax producers but also as pollinators of agricultural and horticultural crops and wild flora (vanEngelsdorp and Meixner, 2010). The total annual global economic worth of pollination amounts to 212 billion USD, representing 9.5% of the value of global agricultural production (Gallai et al., 2009). Unfortunately, however, honeybees are facing enormous threats worldwide (USA, Europe, Middle East) (Crailsheim et al., 2009; Haddad et al., 2009; Soroker et al., 2009), including in Saudi Arabia (Alattal and AlGhamdi, 2015).
Beekeeping is one of the long-standing practices in rural Saudi Arabia and is one of the most important economic activities for the communities (Al-Ghamdi and Nuru, 2013). Approximately 5000 beekeepers maintain more than one million honeybee colonies and produce approximately 9000 metric tons of honey annually (Al-Ghamdi, 2007). Apis mellifera jemenitica Ruttner (= yemenitica auctorum: vide Engel, 1999), the smallest race of A. mellifera, is the only race of A. mellifera naturally found in the country and has been used in apiculture since at least 2000 BC. Traditional beekeeping is mostly practiced using this race, because it is well adapted to the semi-arid to semi-desert conditions of Saudi Arabia (Alqarni et al., 2011; Al-Ghamdi and Nuru, 2013). Also, honey produced by this native bee (A. m. jemenitica) is sold at 10-20 times higher prices than imported honeys (Al-Ghamdi, personal comm.). Despite the great potential and multiple opportunities for beekeeping in Saudi Arabia, the beekeeping industry is growing steadily in the country with different opportunities and, of course, many challenges. The major challenge is the occurrence and distribution of honeybee diseases in the country (Al-Ghamdi, 1990; Alattal and AlGhamdi, 2015; Ansari et al., 2016, 2017). A mysterious decline in honeybee colonies has gained worldwide attention, including in Saudi Arabia. In the last decades, significant losses have been observed in indigenous honeybee colonies in Saudi Arabia (Alattal and AlGhamdi, 2015). Much attention has been given to Colony Collapse Disorder (CCD), which is a syndrome specifically defined as a dead colony with no adult bees and no dead bee bodies but with a live queen, and usually honey and immature bees, still present. Several causes of these large-scale losses have been reported, including honey bee parasites (Varroa destructor, Acarapis woodi); pathogens (Nosema spp. and bee viruses); pesticides, harsh environment, use of antibiotics, poor nutrition, and migratory beekeeping practices (Kevan et al., 2007; Higes et al., 2008; Naug, 2009; Bacandritsos et al., 2010; vanEngelsdorp and Meixner, 2010; Alattal and AlGhamdi, 2015).
Nosemosis is a fungal infection of honey bees caused by either Nosema apis or N. ceranae. N. apis was the historic species infecting European honey bees (Matheson, 1996). N. ceranae was previously isolated in naturally infected Apis cerana worker bees in China (Fries et al., 1996) and was later described infecting Apis mellifera in Europe (Higes et al., 2006). Currently, this parasite is widespread all over the world and has shown the capacity to infect Hymenoptera other than honeybees (Plischuk et al., 2009); it is now a common infection of European honeybees and is highly virulent to its new host (Chen et al., 2009a). This is problematic for beekeepers because N. ceranae has a different seasonal phenology than N. apis, causing more significant problems for beekeepers in summer months and in warm climates (Bourgeois et al., 2010). Both Nosema spp. can co-infect honey bees (Chen et al., 2009b; Paxton et al., 2007; Forsgren and Fries, 2010). Although co-infections occur, N. ceranae has become the predominant species in many regions (Chen et al., 2009a; Klee et al., 2007; Williams et al., 2008; Razmaraii et al., 2013; Haddad, 2014).
Routine optical microscopy assessment can confirm infection with Nosema species, but it is impossible to distinguish between the species because the spores of the two Nosema species are very similar and can hardly be distinguished by light microscopy. In the absence of clear morphological characteristics for species recognition, other techniques using molecular markers may greatly assist in the diagnosis and identification of honeybee microsporidians. Thus, it is necessary to use molecular diagnostic tools and identification methods (Gajger et al., 2010). The PCR technique provides a very sensitive test for detecting microsporidian infection because it enables detection of the parasite even at very low levels of infection.
In some neighboring countries of Saudi Arabia (Egypt, Israel, Jordan, Iraq and Iran), Nosema infection in honeybee colonies has been reported previously (Alzubaidy and Ali, 1994; El-Shemy et al., 2012; Nabian et al., 2011; Razmaraii et al., 2013; Aroee et al., 2016; Soroker et al., 2009). However, only a few preliminary studies have been conducted on nosemosis in honeybees in Saudi Arabia (Al-Ghamdi, 1990; Alattal and AlGhamdi, 2015; Abdel-Baki et al., 2016). Recently, in the Riyadh region of Saudi Arabia, nosemosis was recognized by the presence of Nosema spores through light microscopy, assuming N. apis to be the causal agent (Abdel-Baki et al., 2016). These findings have led to a demand for PCR-based research to determine which species of Nosema have been present in A. m. jemenitica, the indigenous honeybee race of Saudi Arabia, in the recent past, using a detailed survey and molecular characterization.
Sampling
A total of fifty seemingly healthy apiaries of A. m. jemenitica, the indigenous race of Saudi Arabia, owned by different beekeepers were randomly selected following stratified randomization procedures (Moher et al., 2010). Samples were collected from local (A. m. jemenitica) bee races only (10 hives from each apiary) from March to April (the major nectar flow period in Saudi Arabia) during the year 2015. The samples were collected from eight major beekeeping regions of Saudi Arabia, based on the beekeeping management schedule and the categorization of geographical regions: Riyadh, Al-Qassim, Al-Ahsa, Taif and Jazan (hot arid climate regions), Al-Madinah and Al-Bahah (tropical wet and dry climate), and Abha (cold semi-arid climate). The sampled hives had not been treated against Nosema disease for at least 6 months. In each hive, approximately 100 worker bees were collected from the outer honey frames of the brood chamber, placed in falcon tubes containing preservative buffer (RNA Later®), transported to the laboratory of the Bee Research Unit (BRU) at the Department of Plant Protection of the Faculty of Food and Agriculture Sciences at King Saud University and stored at −20°C until analyzed.
Data collection
The data collected from the survey included the following information for each inspected apiary: the date of inspection, the apiary location (to facilitate repeat visits), the name of the owner, the hive type (local or modern), the honeybee race (indigenous or imported), the number of honeybee colonies and the colonies having some unusual symptoms.
Microscopic examination of spores
Samples were initially analyzed by phase contrast microscopy for the presence or apparent absence of Nosema spp. spores. For each sample, the abdomens of 30 adult bees were macerated in 30 mL of ddH2O, the suspension was filtered and centrifuged at 5000 rpm for 10 min, and the homogenate was examined under a phase contrast microscope (Olympus BX51, model BX51TF, Japan), equipped with an Olympus DP71 camera (Olympus, Japan), at 400× magnification (Fig. 2B), and photographed (OIE, 2008). Measurements are presented in micrometers and data are expressed as the mean followed by the range in parentheses. As the morphological characteristics of N. ceranae and N. apis spores are similar and can hardly be distinguished by optical microscopy, all samples were also screened by a multiplex polymerase chain reaction (M-PCR) assay based on 16S rRNA-gene-targeted species-specific primers to distinguish between N. ceranae and N. apis. Positive samples were also used for further molecular diagnosis as discussed below.
16S rRNA gene sequencing and phylogenetic analysis
PCR products were purified using the GenElute PCR Clean-Up Kit (Sigma-Aldrich, India) and sent to BGI Genomics Co., Ltd (Hong Kong, China) for both-end Sanger sequencing. The sequences obtained were manually edited using Sequencher 4.5 (Gene Codes Corp.) and were aligned using the BioEdit sequence editor software version 7.0.5.3. These sequences have been submitted to GenBank (accession numbers KY022481 and KY022482 for ksuNC4 and ksuNC6, respectively). Partial 16S rRNA gene sequences of the isolates were compared with 16S rRNA gene sequences available through a BLAST search (Altschul et al., 1990) in the National Centre for Biotechnology Information (NCBI) database (http://www.ncbi.nlm.nih.gov/). Multiple sequence alignments were performed using CLUSTALW version 1.8 (Thompson et al., 1994). A phylogenetic tree was constructed by the neighbor-joining method (Saitou and Nei, 1987), and the reliability of the tree topology was evaluated by bootstrap analysis using MEGA 6.06 software (Tamura et al., 2013).
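The original tree was built with CLUSTALW and MEGA; for readers working in R, a comparable neighbor-joining step can be sketched with the ape package as below. The FASTA file name is hypothetical, the sequences are assumed to be already aligned (e.g., with CLUSTALW), and the K80 substitution model and bootstrap settings are our illustrative choices rather than the settings used in the published analysis.

```r
library(ape)

# Read an alignment of 16S rRNA sequences (hypothetical file name);
# the sequences are assumed to be pre-aligned.
aln <- read.dna("nosema_16S_aligned.fasta", format = "fasta")

# Pairwise distances under the Kimura 2-parameter (K80) model
d <- dist.dna(aln, model = "K80", pairwise.deletion = TRUE)

# Neighbor-joining tree (Saitou and Nei, 1987) with bootstrap support
tree <- nj(d)
boot <- boot.phylo(tree, aln, function(x) nj(dist.dna(x, model = "K80")), B = 100)

plot(tree)
nodelabels(boot)  # bootstrap counts (out of 100) at internal nodes
```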
Data analysis
The corresponding 95% confidence intervals (95% CI) were calculated and differences among prevalence values were compared by Fisher's exact test. P values <0.05 were considered significant. Cohen's Kappa coefficient was used as a measure of agreement between microscopy and M-PCR. The following ranges were considered for interpretation of Cohen's Kappa coefficient: poor agreement = less than 0.00, slight agreement = 0.00-0.20, fair agreement = 0.21-0.40, moderate agreement = 0.41-0.60, substantial agreement = 0.61-0.80, and almost perfect agreement = 0.80-1.00.
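To make these calculations concrete, a minimal R sketch is given below; the counts are illustrative placeholders rather than the study data, and the exact CI and test procedures used in the original analysis may differ in detail.

```r
# Illustrative 2x2 agreement table between microscopy and M-PCR (not the study data)
agree <- matrix(c(25, 0, 4, 21), nrow = 2,
                dimnames = list(microscopy = c("pos", "neg"),
                                mpcr       = c("pos", "neg")))
n <- sum(agree)

# Cohen's Kappa computed from first principles
po    <- sum(diag(agree)) / n                        # observed agreement
pe    <- sum(rowSums(agree) * colSums(agree)) / n^2  # agreement expected by chance
kappa <- (po - pe) / (1 - pe)

# 95% CI for the prevalence of PCR-positive apiaries (29 of 50),
# here via the exact binomial method
binom.test(29, 50)$conf.int

# Fisher's exact test comparing prevalence between two climate groups
# (illustrative counts: positives/negatives in hot arid vs. other regions)
fisher.test(matrix(c(14, 16, 15, 5), nrow = 2,
                   dimnames = list(result  = c("pos", "neg"),
                                   climate = c("hot_arid", "other"))))
```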
Statistical analysis
The prevalence of Nosema spp. contamination levels in honeybees from different geographical regions of Saudi Arabia was calculated by descriptive statistics and a confidence interval (CI) of 95%.
Field survey
During the survey, the presence of disease agents and parasites was monitored, and the results showed that the Varroa mite frequently infests honeybee colonies in all the sampled locations. The presence of Nosema was recorded in all locations (Table 1). The majority of infections were reported in the Al-Bahah, Abha and Taif regions of Saudi Arabia. These are the three major beekeeping regions in Saudi Arabia, and most of the honey is produced in these three areas. In comparison to Riyadh, Al-Qassim and Al-Ahsa, these three regions have high rainfall and moderate temperatures (Table 1).
Microscopic examination
Spores of Nosema spp. were detected at 400× magnification under a phase contrast microscope (Fig. 1B). Phase contrast microscopy of the midgut content revealed the presence of large numbers of Nosema spp. spores. Nosema spores were oval shaped, measuring 3.0-4.0 µm in width and 5.0-7.0 µm in length (n = 30) (Fig. 2). Nosema infection was found in all sample collection locations. As determined by microscopy, Nosema spp. were found in 20.59% of the total sampled colonies. In hot arid climatic eco-regions, 20% (Riyadh), 18.36% (Al-Qassim), 15% (Al-Ahsa), 19% (Taif) and 12.06% (Jazan) of samples were found to be microscopically positive. Conversely, in the tropical savanna climate regions of Al-Madinah (26%) and Al-Bahah (31%) and in the cold semi-arid climatic region of Abha (27.14%), infection was higher than in the hot arid climatic sampling locations. A significant difference in the infection level was found between the hot arid climatic eco-regions and the tropical savanna climate regions together with the cold semi-arid climatic region (P < 0.001) (Table 1).
Molecular characterization of Nosema spp
There is no previous record of the presence of Nosema spp. based on molecular identification in Saudi Arabia. Therefore, we set out to confirm the identification of Nosema spp. by M-PCR and DNA sequencing, collecting samples from geographically distinct locations throughout Saudi Arabia. Based on the PCR analysis, Nosema infections were detected in apiaries from all eight beekeeping regions examined. Detection by PCR using N. apis- and N. ceranae-specific primers found that 29 out of 50 apiaries (58%) were positive for N. ceranae (Table 1), but no apiaries were found to be positive for N. apis (Fig. 3). Infection was more frequent in the samples collected from the tropical savanna climatic regions and the cold semi-arid climatic region than in those from the hot arid climatic regions. Overall, a total of 29 out of 50 (58%, 95% CI: 41.96-58.04%) apiaries tested positive for Nosema infection by microscopy and M-PCR. The Cohen's Kappa coefficient for the association between the results of microscopy and the results of M-PCR was 1, indicating that there was a perfect level of agreement between the two diagnostic methods in all the bee samples. In hot arid climatic eco-regions, 31.6%, 32.5%, 30%, 22% and 15.51% of samples from Riyadh, Al-Qassim, Al-Ahsa, Taif, and Jazan, respectively, were found to be PCR positive. Conversely, in the tropical savanna climate regions, 34% (Al-Madinah) and 39% (Al-Bahah), and in the cold semi-arid climatic region, 35.71% (Abha), were PCR positive (Table 1).
BLAST and phylogenetic analysis
Similar to what has been observed in other countries (Medici et al., 2012), we observed some intraspecific variation in the 16S SSU of N. ceranae in Saudi Arabia. Comparative analysis of the 16S rRNA gene sequences of the ksuNC4 and ksuNC6 isolates showed 217/218 (99%) sequence identity. Over the entire sequence range analyzed (218 bp), only one position (18th bp) was polymorphic, with one gap. In the BLAST search of these sequences against the GenBank nucleotide database, the highest similarity (99%) was found with N. ceranae 16S rRNA sequences. A nucleotide BLAST search showed that the DNA sequence obtained from the ksuNC4 (218 bp) isolate had 100% sequence identity with the 16S rDNA of some N. ceranae isolates (gb| KC680636.1, gb| KC680629.1 and gb| DQ329034.1). The first two closest hits of ksuNC4 are closely related to the DNA sequences of N. ceranae isolated from honeybee samples from Lebanon (Roudel et al., 2013), and the third closest hit belongs to the sequence of N. ceranae isolated in Spain. Similarly, the DNA sequence obtained from the ksuNC6 isolate (417 bp) showed 100% sequence identity with the 16S rDNA of other N. ceranae isolates (gb| KC680641.1, gb| KC680642.1, gb| KC680637.1 and gb| JF431546.1). The first two closest hits of ksuNC6 belong to N. ceranae isolates from Moroccan honeybees, the third one is from Lebanon (Roudel et al., 2013), and the fourth closest hit belongs to the sequence of N. ceranae isolated in Iran (Nabian et al., 2011). This indicates that Lebanon (to the north of Saudi Arabia) and Iran (to the east), close neighbors of Saudi Arabia, harbor the same genotype as that found in Saudi Arabia.
The evolutionary relationships between the two isolates and previously reported isolates were reconstructed using MEGA 6.06 software (Tamura et al., 2013). The results illustrate the degree of evolutionary relatedness between the two Saudi Arabian isolates and other previously reported isolates (Fig. 4). In our study, the ksuNC4 isolate showed greater relatedness to three previously reported N. ceranae isolates {gb| KC680629.1 (Lebanon), gb| KC680636.1 (Lebanon) and gb| DQ329234.1 (Spain)}, while isolate ksuNC6 formed a separate clade with gb| KC680637.1 (Lebanon), gb| JF431546.1 (Iran) and gb| KC680642.1 (Morocco). This indicates that the genotypes of the two isolates differ. The ksuNC4 genotype was isolated from an apiary located in Al-Bahah and the isolate ksuNC6 from Jazan, two different geographical regions of Saudi Arabia.
Discussion
During the last decade, an increase in infection of honeybees by the Nosema parasite (Emsen et al., 2016), accompanied by increased numbers of honeybee colony deaths and decreased honey production, has been reported worldwide. One of the main reasons that might explain these problems is the recent introduction of N. ceranae into honeybees in Europe (Higes et al., 2006), which appears to be highly virulent (Paxton et al., 2007; Higes et al., 2007). Environmental conditions also strongly influence many parasitic relationships and, regardless of the effects of altitude, flora and colony management, in warm countries like Spain the influence of temperature on the consequences of N. ceranae has been observed (Martín-Hernández et al., 2012).
The prevalence of nosemosis has been shown to vary among regions and years (Mulholland et al., 2012). Although N. apis has a worldwide distribution, it is not considered an important problem in tropical and sub-tropical regions (Wilson and Nunamaker, 1983). However, in temperate regions N. apis infections typically peak in the spring, decrease during the summer and then increase again in the fall before declining during the early winter months (Pickard and El-Shemy, 1989). N. ceranae was the most prevalent microsporidian found in A. mellifera in hotter regions (Mediterranean regions) and is reduced in colder climates (Fries, 2010; Gisder et al., 2010). Since N. ceranae infection appears to be more common in warmer climates and in specific geographical areas, as N. ceranae spores are capable of surviving high temperature (60°C) (Fenoy et al., 2009), this should be considered when importing bees from such areas (Fries, 2010).
In Sweden, 83% of colonies had N. apis only and 17% had both N. apis and N. ceranae (Fries, 2010). In Scotland, 70.4% of colonies revealed the presence of both N. ceranae and N. apis (Bollan et al., 2013). In the East Azerbaijan province of Iran, a country on the eastern border of Saudi Arabia, 67.1% of colonies revealed the presence of N. ceranae (Razmaraii et al., 2013). Nabian et al. (2011) and Aroee et al. (2016) also reported the presence of N. ceranae in Iran. In Jordan, situated on the northern border of Saudi Arabia, 23.9% of colonies were found to be infected by N. ceranae, and N. apis infection was not detected by PCR (Haddad, 2014). Further evidence is available for the prevalence of Nosema infection in other neighboring countries of Saudi Arabia, for instance, Iraq (Alzubaidy and Ali, 1994), Egypt (El-Shemy et al., 2012), Kuwait (OIE, 2004) and Oman (Matheson, 1993). Symptoms of nosemosis have been reported before among honey bees in Saudi Arabia (Al-Ghamdi, 1990; Matheson, 1993; Alattal and AlGhamdi, 2015). Recently, Abdel-Baki et al. (2016) observed the presence of Nosema spores using an optical microscope in honeybees from the Riyadh region of Saudi Arabia.
The identification of Nosema in the eight provinces surveyed in Saudi Arabia was expected, given that symptoms of this disease and microscopic observations of spores have previously been reported in Saudi Arabia and that Nosema infection is prevalent in neighboring countries. N. ceranae is a recently detected fungal pathogen of Saudi Arabian honeybees, as reported in this study, since Al-Ghamdi (1990) and Abdel-Baki et al. (2016) suspected N. apis in the honeybees of Saudi Arabia on the basis of morphological symptoms and microscopy. In this study, the high prevalence (58%) of N. ceranae together with the absence of N. apis infection in the present survey corroborates the findings of other authors that N. ceranae is now widespread in Saudi Arabia and has essentially replaced N. apis. High thermotolerance at 60°C and 35°C, resistance to desiccation, a significant decrease in viability after freezing, and rapid degeneration of N. ceranae spores maintained at 4°C were observed under experimental conditions (Fenoy et al., 2009). Therefore, it has been proposed that N. ceranae may be more prevalent in warmer climates (Fries, 2010), such as the typical Middle Eastern climate of the different study areas. The present prevalence is close to the value as high as 67% observed in Iran (Razmaraii et al., 2013) but considerably higher than the 23.9% and 49.2% previously reported in Jordan (Haddad, 2014) and Thailand (Chupia et al., 2016), respectively. Different prevalence values reported in the literature may be due to differences in the number of apiaries examined, sampling methods, geographical areas, characteristics of the honeybee population, diagnostic techniques, and other biotic and abiotic factors. Based on our findings, microscopy is still a valuable, relatively cheap and simple method to screen for the presence of Nosema infection in apiaries, since a perfect agreement between microscopy and M-PCR was observed in the experiments. Unfortunately, very strong morphological similarities occur between N. apis and N. ceranae spores, resulting in a high risk of misdiagnosis. The spores of both Nosema species are smooth and darkly outlined, with an elongated-elliptical shape and a bright centre. N. apis spores have rounded ends and measure 6.0 µm in length and 3.0 µm in width. N. ceranae spores have sharper ends and measure 4.4 µm in length and 2.2 µm in width (Huang, 2012; Michalczyk et al., 2011). The main differences are noted with respect to the length of the polar filament, and they can be detected only under an electron microscope (Fries, 2010; Paxton, 2010).
In this scenario, molecular techniques such as M-PCR are needed for reliable identification of Nosema to the species level (Michalczyk et al., 2011). Indeed, the advent of new highly sensitive and specific molecular tools has played a key role in the detection of N. ceranae in A. mellifera and in the retrospective analysis of samples, showing that N. ceranae is not a new microsporidian agent in A. mellifera but has infected this host during the last two decades (Guerrero-Molina et al., 2016). It is likely that the delay in a correct identification of N. ceranae in A. mellifera is attributable to the routine use of microscopy as a diagnostic technique for the identification of Nosema spores (Higes et al., 2010). Therefore, accurate identification of Nosema spores to the species level by molecular tools should be especially useful for Saudi Arabian beekeepers.
A nucleotide BLAST search showed that the DNA sequence obtained from the ksuNC4 (218 bp) isolate had 100% sequence identity with other N. ceranae isolates from Lebanon (Roudel et al., 2013), whereas the ksuNC6 (217 bp) isolate showed 100% sequence identity with N. ceranae isolates from Morocco and Lebanon. This indicates that Lebanon and Morocco, which are close neighbors of Saudi Arabia, have the same type of genotype as that found in Saudi Arabia. It is unlikely that the primary infestation of N. ceranae in Saudi Arabia is due to human-facilitated transportation of bee packages or bee products between these countries; rather, it seems that a third party is the main source of this type of infestation in both countries, which could be Egypt, where bee exportation to most African and Middle Eastern countries is common (Alattal and AlGhamdi, 2015).
Nosema spores are primarily spread to neighboring bees through fecal matter contaminating the environment (fecal-oral pathway) or, alternatively, they can also reach the crop and be regurgitated to other colony members during food exchange (oral-oral pathway) (Smith, 2012). Therefore, infections by both Nosema species can be transmitted among bees via ingestion of environmentally resistant mature spores from contaminated wax, combs, other hive interior surfaces, and water (OIE, 2013). Other potential routes of transmission include contamination of pollen, beekeeping material, and honey as well as cleaning activities and trophallaxis (Higes et al., 2010). Auto-infections can also occur . In our survey, we did not attempt to identify any source of infection. However, all these routes of spread and transmission of infective spores may have played a role in the presence of N. ceranae in the A. mellifera colonies investigated.
In contrast to nosemosis caused by N. apis, bees affected by N. ceranae do not exhibit defecation near or inside the hive with evident dysentery; instead, the main clinical symptom is dwindling, i.e., the progressive reduction in the number of bees in a colony with no apparent cause, until the point of collapse (Huang, 2012; Paxton, 2010). Sometimes dwindling may affect the whole apiary and other times only specific colonies may show symptoms. The disease sometimes occurs rapidly but may also develop over several months.
In a recent study in Argentina, an increasing gradient of infection and counts of Nosema spp. was observed from warmer to colder regions (Pacini et al., 2016). In accordance with the results of Pacini et al. (2016), in our study the maximum infection was found in Al-Bahah (tropical savanna (Aw) climate), Abha (cold semi-arid (BSk) climate) and Al-Madinah (tropical savanna (Aw) climate), whereas comparatively little infection was observed in the samples collected from the apiaries situated in the hot arid (BWh) climate (Riyadh, Al-Qassim, Al-Ahsa, Taif and Jazan) (Table 1). This uneven distribution pattern of Nosema spp. and infection level may be caused by a diversity of as yet unknown factors that have to be identified in future investigations, such as certain environmental conditions, beekeepers' management practices and the genetic background of the honey bees, all of which can influence Nosema distribution. Alternatively, an ongoing displacement of N. apis by N. ceranae may also result in the observed pattern, yet such a chronological process can only be confirmed in a long-term study.
Our results show that N. ceranae is the only Nosema spp. found to infect honey bees in the different geographical regions of Saudi Arabia. None of the samples was infected with N. apis. From the available literature, we can understand that N. apis has been present in some Middle Eastern countries (Al-Ghamdi, 1990; Alzubaidy and Ali, 1994; OIE, 2004; Matheson, 1993) as well as in Europe and America (Paxton et al., 2007; Chen et al., 2009b) for the past decade. Recently, N. ceranae has been emerging as a new microsporidian infection in Middle Eastern and North African countries, as already reported in Europe, the USA, Canada, and China (El-Shemy et al., 2012; Roudel et al., 2013; Haddad, 2014; Aroee et al., 2016; Williams et al., 2008; Higes et al., 2006). This indicates that N. ceranae is a newly emerging pathogen for honey bees, and has presumably been transferred from its original host A. cerana to A. mellifera much earlier than previously recognized (Guerrero-Molina et al., 2016). When N. ceranae appeared and started to parasitize Saudi Arabian bees is unknown, and it is difficult to investigate the past incidence because of a lack of bee samples. However, the total absence of N. apis might be due to the better adaptation of N. ceranae to the warm climate of Saudi Arabia (Forsgren and Fries, 2013; Martín-Hernández et al., 2012). This is the first report of the molecular detection of N. ceranae in Saudi Arabia. Further research and analysis of more colonies are needed to determine the actual prevalence of this new agent in the country. Intensive surveys and further research are thus necessary to determine the distribution and prevalence of Nosema spp. in the Kingdom of Saudi Arabia and the appropriate preventive measures. This report is a warning for the beekeeping industry of Saudi Arabia regarding protection from honeybee pathogens. Beekeepers must pay attention when moving their colonies in different seasons to avoid pathogens, including Nosema.
Conclusions
Overall, our results provide evidence that N. ceranae infection occurs frequently in the cohort of apiaries examined despite the lack of clinical signs. This suggests that colony disease outbreaks might be caused by other factors, both known and unknown, that singly or in combination may lead to a higher susceptibility of honeybees to N. ceranae. The results confirm the colonization of N. ceranae in Saudi Arabia and indicate the need for further molecular study and more extensive monitoring in order to elucidate possible links between infection by N. ceranae and colony losses in Saudi Arabia. | 2018-04-03T00:28:20.112Z | 2017-02-01T00:00:00.000 | {
"year": 2017,
"sha1": "192c71a8d4b7b7dbfe5361eb15d8ab470ce21cce",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.sjbs.2017.01.054",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "54ac46d8fb2ce5c5556c687af25055d9710bfcb4",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
245162322 | pes2o/s2orc | v3-fos-license | Hierarchical Ridge Regression for Incorporating Prior Information in Genomic Studies
There is a great deal of prior knowledge about gene function and regulation in the form of annotations or prior results that, if directly integrated into individual prognostic or diagnostic studies, could improve predictive performance. For example, in a study to develop a predictive model for cancer survival based on gene expression, effect sizes from previous studies or the grouping of genes based on pathways constitute such prior knowledge. However, this external information is typically only used post-analysis to aid in the interpretation of any findings. We propose a new hierarchical two-level ridge regression model that can integrate external information in the form of “meta features” to predict an outcome. We show that the model can be fit efficiently using cyclic coordinate descent by recasting the problem as a single-level regression model. In a simulation-based evaluation we show that the proposed method outperforms standard ridge regression and competing methods that integrate prior information, in terms of prediction performance when the meta features are informative on the mean of the features, and that there is no loss in performance when the meta features are uninformative. We demonstrate our approach with applications to the prediction of chronological age based on methylation features and breast cancer mortality based on gene expression features.
Introduction
In genomic studies, there is often a great deal of prior knowledge about the genomic features that are being modeled. These "meta features" (or features-of-features) may be comprised of gene annotations (e.g., an indicator to denote whether a gene belongs to a particular pathway), natural groupings of the genomic features (e.g., methylation probes mapping to genes), or information from previous studies (e.g., scores or effect estimates of a SNP on the outcome) that the researcher considers relevant to the outcome of interest. For example, the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) study includes cDNA microarray profiling of close to two thousand breast cancer patients and patients' survival information within the study follow-up (Curtis et al., 2012). In this example, which we later use to illustrate our approach, we are interested in predicting patient mortality based on their gene expression profiles. As potentially informative meta features we consider the attractor metagenes identified by Cheng et al. (2013). These are groups of genes that capture molecular events known to be associated with clinical outcomes in many cancers. We expect improved prediction performance when incorporating these metagenes into the model building process.
Genomic data are often high-dimensional, i.e., there are more features than observations in the study. Classical regression methods such as linear and logistic regression break down in high-dimensional settings. High-dimensional regression methods require regularization, a technique that modifies the loss function by adding a penalty term that shrinks the regression coefficients toward zero. Among the best known examples of regularized/penalized regression are ridge regression (Hoerl and Kennard, 1976), LASSO (Tibshirani, 1996), and elastic net (Zou and Hastie, 2005), though many other approaches have been developed to encourage additional structure or desirable properties of the regression estimates (e.g., Fan and Li, 2001; Yuan and Lin, 2006; Zou, 2006; Zhang, 2010; Dai et al., 2018). The amount of shrinkage induced by the penalty dictates the balance between model complexity (bias) and model stability (variance). It is controlled by a penalty parameter that requires tuning, which is typically accomplished via cross-validation.
While most regularization methods penalize all regression coefficients equally, feature-specific weighting can be performed to allow for differential shrinkage. In particular, several approaches have been recently proposed to improve the prediction performance of regularized regression models through the integration of prior information. Using the LASSO (Tibshirani, 1996) framework, Bergersen et al. (2011) incorporate relevant meta features by developing feature-specific penalties. This modification provided more stable model selection and improved prediction over the standard LASSO. Similarly, Van De Wiel et al. (2016) proposed an adaptive group-regularized version of ridge (Hoerl and Kennard, 1976) regression which derives empirical Bayes estimates for group-specific penalties by utilizing meta features such as gene annotations or external p-values. Recently, Tay et al. (2021) proposed the feature-weighted elastic net that uses meta features to adapt the feature-specific penalties for elastic net (Zou and Hastie, 2005) regularization, and Zeng et al. (2020) proposed an alternative approach that models the magnitude of the feature-specific tuning parameters as a log-linear function of the meta features.
Some of these approaches fix the weights in advance (e.g., Bergersen et al., 2011), which requires unavailable knowledge about the relative importance of the features. Others adaptively (re-)estimate these weights (see, e.g., Van De Wiel et al., 2016; Tay et al., 2021), which in turn limits the number of meta features that can be integrated at any given time. In addition, by modifying the penalties, these methods assume that the meta features explain variation in the features. Instead of using the meta features to determine weights, we propose a hierarchical ℓ2-regularized (two-level ridge regression) model that jointly models the subject-level features and meta features, which enables the integration of any type and number of meta features. At the first level, the outcome is regressed on the subject-level features, as in standard regularization methods. Rather than assuming the meta features affect the variance of the subject-level features, the second level models the effect of the meta features on the mean of the subject-level features. ℓ2-regularization is applied to the subject-level features and the meta features, as both sets (features and meta features) have the potential to be highly correlated and high dimensional. We show that the two-level ridge regression model can be rewritten as a single ridge regression with a modified design matrix and parameter vector, which allows us to use efficient optimization techniques to estimate the model parameters. We also derive closed-form solutions under specific scenarios that shed light on how the external information impacts estimation of the first-level regression coefficients.
The rest of the paper is organized as follows. The two-level ridge regression model is described in Section 2. In Section 3, we provide a simulation study that compares our proposed method to competing methods. Real data applications for predicting chronological age and breast cancer mortality are given in Section 4. Discussion of our findings and parting comments are provided in Section 5. The two-level ridge regression model is implemented in the R package xrnet (Weaver and Lewinger, 2019, 2021), which can be found at https://CRAN.R-project.org/package=xrnet.
Setup
Consider the linear regression model

y = Xβ + ϵ,   (1)

where y ∈ ℝ^n is a vector of quantitative measurements collected on n subjects, X = (x_1, …, x_n)^T is an n × p matrix of genomic features (e.g., expression levels, genotypes, methylation probes), β = (β_1, …, β_p)^T is the vector of regression coefficients, and ϵ ∼ N_n(0, σ²I_n) for some σ² > 0. We assume, for notational convenience, that the observations are standardized with sample mean 0 (which removes the intercept term) and sample variance 1. The genomic features are assumed to be high-dimensional, i.e., the number of features p exceeds the sample size n.
We also assume that there is a set of q meta features (e.g., gene annotations, natural groupings, information from previous studies) collected for each of the p features that can be represented as a p × q matrix Z. The number of meta-features can be larger than p and/or n.
The Model
In a high-dimensional setting, unique ordinary least squares estimates for model (1) do not exist. Essentially, the linear regression model with more features than observations is too complex for the amount of data available. As mentioned in the introduction, regularization methods (see e.g., Hoerl and Kennard, 1976; Tibshirani, 1996; Fan and Li, 2001; Zou and Hastie, 2005; Zou, 2006; Zhang, 2010; Dai et al., 2018) address this issue by balancing model complexity/parsimony and goodness of fit. Initially developed for handling multicollinearity, ridge regression (Hoerl and Kennard, 1976) is an effective approach for analyzing high-dimensional data. Ridge regression is the solution to an optimization problem with a modified objective function that adds an ℓ2-penalty to the standard squared loss function:

β̂^ridge = argmin_β { (1/2)‖y − Xβ‖₂² + (λ/2)‖β‖₂² },   (2)

where ‖β‖₂² = ∑_{j=1}^p β_j² and λ ⩾ 0. The ℓ2 penalty encourages shrinkage of the coefficient estimates toward zero and the degree of shrinkage is controlled by the choice of the tuning parameter λ (see Section 2.4). A common approach to tune λ is to select the value that minimizes some criterion (e.g., mean squared error) from a grid of possible values of λ using k-fold cross validation.
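As a brief illustration of ridge estimation and this cross-validation step, the R sketch below computes the closed-form ridge solution in (2) for a fixed λ and then tunes λ by 10-fold cross-validation with cv.glmnet (alpha = 0 corresponds to the ridge penalty). The data are simulated for illustration only; note that glmnet scales its loss and standardizes features by default, so its λ values are not on the same scale as λ in (2).

```r
library(glmnet)
set.seed(1)
n <- 100; p <- 200
X <- matrix(rnorm(n * p), n, p)
y <- drop(X[, 1:5] %*% rep(0.5, 5) + rnorm(n))

# Closed-form ridge solution for a fixed penalty value
lambda <- 1
beta_ridge <- solve(crossprod(X) + lambda * diag(p), crossprod(X, y))

# Tuning lambda over a grid by 10-fold cross-validation (ridge: alpha = 0)
cvfit <- cv.glmnet(X, y, alpha = 0, nfolds = 10)
cvfit$lambda.min  # value minimizing the cross-validated mean squared error
```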
To incorporate meta features into high-dimensional linear regression, we propose a two-level ℓ2-regularization approach based on minimizing the following objective function

argmin_{β,γ} { (1/2)‖y − Xβ‖₂² + (λ1/2)‖β − Zγ‖₂² + (λ2/2)‖γ‖₂² },   (3)

where λ1 > 0 and λ2 > 0 are two tuning parameters. The first term in (3) is the standard least squares loss, the second term is a ridge penalty that shrinks the estimates of β toward some feature-specific mean μ = Zγ (rather than 0), and the third term is a standard ridge penalty that shrinks the estimates of γ. Note that unlike standard ridge regression, the value of μ toward which the β are shrunk is not fixed but modeled as a linear function of the meta features Z. This second-level penalty encourages genomic features with similar meta-feature profiles to have more similar coefficient estimates compared to genomic features with dissimilar profiles, effectively "borrowing information" across features. We provide specific examples in Section 2.4. Note also that when γ = 0, (3) reduces to (2), and thus standard ridge regression is a particular submodel of our hierarchical formulation. Furthermore, the second term can be viewed as a least squares regression of β on Z. In this case, β takes the role of the "outcome". A Bayesian motivation behind this hierarchical formulation is provided in the Online Supplementary Materials. Under the Bayesian framework, it is clear that (3) assumes the meta features affect the mean of the subject-level features. This is in contrast to other approaches that integrate meta features by creating feature-specific penalties, which, consequently, assumes that the meta features impact the variance of the subject-level features. Shrinkage of both the subject-level features (toward the feature-specific mean μ) and the meta features (toward 0) is controlled by λ1 and λ2, respectively. Similar to standard ridge regression, one can use k-fold cross validation to select the optimal pair of values for λ1 and λ2 over a two-dimensional grid.
While equation (3) posits a natural hierarchical structure for the model, the objective function can be simplified to a single linear regression model using the variable substitution ϕ = β − Zγ. By jointly minimizing over (ϕ, γ), (3) can be rewritten as

argmin_{ϕ,γ} { (1/2)‖y − X(ϕ + Zγ)‖₂² + (λ1/2)‖ϕ‖₂² + (λ2/2)‖γ‖₂² }.   (4)

The formulation in (4) can be extended to include penalties other than ridge. In fact, commonly-used penalties such as the LASSO or elastic net could be used for regularization on either (or both) the subject-level or meta-feature coefficients. We focus on ℓ2 regularization on both levels due to its ability to handle highly-correlated features (Zou and Hastie, 2005) and its generally good performance in prediction problems.
In summary, our two-level ridge regression model can be reformulated as the single-level ridge regression

θ̂ = argmin_θ { (1/2)‖y − Wθ‖₂² + (1/2)θ^T Λ θ },   (5)

where W = [X, XZ], θ = (ϕ^T, γ^T)^T and Λ = diag(λ1 I_p, λ2 I_q), so that the first p variables, X, have a specific penalty parameter, λ1, and the last q variables, XZ, have a specific penalty parameter λ2. It may seem that (5) provides a framework for differential ℓ2 regularization of multi-omic data (e.g., Gross and Tibshirani, 2015; Chai et al., 2017; Liu et al., 2018). While multi-omic data refers to a collection of multiple subject-level measurements, our hierarchical formulation assumes that we have one set of measurements at the subject level (X) and one set of meta features at the feature level (Z). Since the columns of the XZ matrix are linear combinations of the original features, with weights given by the columns of Z, the matrix W is never of full rank, even when p + q < n. Shrinkage is therefore necessary to produce unique estimates, even in the low-dimensional case. Furthermore, (5) admits the closed-form solution θ̂ = (W^T W + Λ)^{-1} W^T y, which can be computed using numerical linear algebra. In practice, however, we propose to employ cyclic coordinate descent due to its efficiency in generating entire solution paths across a grid of tuning parameters through the use of warm starts (Friedman et al., 2010) and for its generalizability to other outcome types (see Section 2.5). We outline the cyclic coordinate descent algorithm in the Online Supplementary Materials.
The formulation of (5) allows it to be solved using currently-available software (e.g., glmnet) for fixed values of (λ 1 , λ 2 ). However, an important distinction is that we allow ϕ to be penalized differently than γ. We demonstrate this in our simulation study. The cyclic coordinate descent algorithm simultaneously estimates ϕ and γ. Estimates of β can be obtained by the back transformation β = ϕ + Zγ. Our implementation estimates the model parameters for a two-dimensional grid of penalty tuning parameters (λ 1 , λ 2 ) and performs joint parameter tuning of λ 1 and λ 2 using cross validation.
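A minimal base-R sketch of this single-level reformulation is given below: it solves the augmented ridge problem in (5) for fixed (λ1, λ2) via the closed form and back-transforms to β = ϕ + Zγ. This is a conceptual illustration, not the xrnet implementation (which uses cyclic coordinate descent and tunes the penalties by cross-validation); the function name, simulated data, and penalty values are ours.

```r
set.seed(1)
n <- 100; p <- 50; q <- 4
Z <- matrix(rbinom(p * q, 1, 0.2), p, q)         # meta features (e.g., pathway indicators)
beta_true <- rnorm(p, mean = Z %*% rep(0.3, q), sd = 0.1)
X <- matrix(rnorm(n * p), n, p)
y <- drop(X %*% beta_true + rnorm(n))

two_level_ridge <- function(X, y, Z, lambda1, lambda2) {
  p <- ncol(X); q <- ncol(Z)
  W <- cbind(X, X %*% Z)                         # augmented design [X, XZ]
  Lambda <- diag(c(rep(lambda1, p), rep(lambda2, q)))
  theta <- drop(solve(crossprod(W) + Lambda, crossprod(W, y)))
  phi <- theta[1:p]; gamma <- theta[p + 1:q]
  list(beta = drop(phi + Z %*% gamma), gamma = gamma)
}

fit <- two_level_ridge(X, y, Z, lambda1 = 10, lambda2 = 1)
head(fit$beta)
```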
Behavior of the Two-Level Ridge Regression Model
When the matrix X is of full column rank (i.e., the well-conditioned, low-dimensional case), we can investigate the relationship between the ridge and ordinary least squares solutions.
Under an orthonormal design matrix (i.e., X^T X = I_p) the ridge estimator has the explicit solution

β̂^ridge = (1/(1 + λ)) β̂^ols,   (6)

where β̂^ols are the least squares estimates. Therefore, one can see that β̂^ridge → β̂^ols as λ → 0. Similar to the closed-form solution in (6), under the single-level formulation (5) we can derive closed-form solutions for the parameter estimates, under certain assumptions, that reveal how the external information in Z impacts estimation of the coefficients β. While we let X denote generic genomic features, for concreteness, we present the following examples in terms of gene expression levels.
Case 1: Disjoint Groups (E.g., Gene Expression for Genes in Non-Overlapping Pathways)
Let X be an n × 4 orthogonal design matrix (i.e., X^T X = I_4) of gene expression levels. Suppose that the first two genes belong to one specific pathway and the last two genes belong to another pathway, disjoint from the first. Then Z can be expressed as a 4 × 2 matrix of binary indicators whose first column is (1, 1, 0, 0)^T and whose second column is (0, 0, 1, 1)^T. Solving (5) under these assumptions gives, for example,

β̂_1 = β̂_1^ridge + (λ1² / (2λ1 + λ2 + λ1λ2)) (β̂_1^ridge + β̂_2^ridge),

with analogous expressions for the remaining coefficients. Thus we see that the subject-level estimates are equal to their standard ridge estimator plus a weighted sum of the ridge estimates of the genes in the same pathway.
Case 2: Genes in Overlapping Pathways
Our previous example assumed that genes belong to two disjoint pathways, which lends itself to a simple interpretation of the estimators. We now assume that X is an n × 3 orthogonal design matrix of gene expression levels and that Z is a 3 × 2 matrix of binary pathway indicators in which the two pathways overlap, i.e., at least one gene belongs to both pathways. Each β̂_j is now a linear combination (i.e., a weighted sum) of all three ridge estimates.
Case 3: Orthogonal X and Z
While meta features that define feature groupings are common, meta features of interest can also be quantitative (e.g., test statistics or p-values from previous studies). We now assume only that X and Z are orthogonal matrices (i.e., X^T X = I_p and Z^T Z = I_q), so that Z can contain quantitative meta features. A general solution in this case is given by

β̂ = ( I_p + (λ1² / (λ1λ2 + λ1 + λ2)) ZZ^T ) β̂^ridge,

so the two-level estimates adjust the standard ridge estimates via ZZ^T. The matrix ZZ^T can be thought of as a matrix of pairwise similarities between the features, where similarity is measured by the inner product of the pairwise meta-feature profiles. Thus, information is borrowed across all features proportionally to their similarity.
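The expression above can be checked numerically. The R snippet below compares the closed-form estimate to a direct solve of the augmented problem when X and Z have orthonormal columns; all quantities are simulated and the penalty values are arbitrary.

```r
set.seed(2)
n <- 60; p <- 10; q <- 3
X <- qr.Q(qr(matrix(rnorm(n * p), n, p)))  # X^T X = I_p
Z <- qr.Q(qr(matrix(rnorm(p * q), p, q)))  # Z^T Z = I_q
y <- rnorm(n)
l1 <- 2; l2 <- 3

# Direct solution of the augmented ridge problem (5)
W <- cbind(X, X %*% Z)
theta <- drop(solve(crossprod(W) + diag(c(rep(l1, p), rep(l2, q))), crossprod(W, y)))
beta_direct <- theta[1:p] + drop(Z %*% theta[p + 1:q])

# Closed-form expression from Case 3
beta_ridge  <- drop(crossprod(X, y)) / (1 + l1)
beta_closed <- drop((diag(p) + l1^2 / (l1 * l2 + l1 + l2) * tcrossprod(Z)) %*% beta_ridge)

max(abs(beta_direct - beta_closed))  # agrees up to numerical precision
```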
Extension to GLM outcomes
The two-level ridge regression model can be easily extended to models with non-normal outcomes (e.g., binary, categorical, count). Under the generalized linear model framework, we assume that the observations v_i = (x_i^T, y_i)^T, i = 1, …, n, are mutually independent and that, conditional on x_i, y_i belongs to the exponential family with density

f(y_i; ξ_i, ν) = exp{ (y_i ξ_i − a(ξ_i)) / b(ν) + c(y_i, ν) },   (7)

where ξ is the canonical parameter, ν > 0 is the scale (dispersion) parameter, and a(ξ), b(ν), and c(y, ν) are known functions whose forms depend on the distribution (Dobson and Barnett, 2018; McCullagh, 2019). Furthermore, under the assumption that a(·) is twice differentiable, (7) indicates that E(y_i|x_i) = μ_i = a′(ξ_i) and var(y_i|x_i) = a″(ξ_i)b(ν). In addition, the canonical parameter ξ_i is connected to x_i through a prespecified link function h(μ_i) = x_i^T β for some β = (β_1, …, β_p)^T. The likelihood function for β is defined as L(β) = ∏_{i=1}^n f(y_i; ξ_i, ν), and the log-likelihood is defined as l(β) = log L(β). We estimate the regression coefficients β by minimizing the negative log-likelihood function. With the same substitution as in (5), i.e., θ = (ϕ^T, γ^T)^T and Λ = diag(λ1 I_p, λ2 I_q), the two-level ridge GLM can now be defined as

argmin_θ { −l(θ) + (1/2) θ^T Λ θ }.
Since −l(θ) is convex and the ridge penalty is separable, cyclic coordinate descent can again be used to estimate the parameters in the model (see Online Supplementary Materials). We provide an example of two-level ridge logistic regression in our numerical studies, and a real data application on breast cancer mortality is provided.
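To make the GLM extension concrete for the binary case, a small Newton/IRLS sketch on the augmented design is shown below. This is our own illustrative implementation of the penalized objective above (written with the penalty (1/2)θ^T Λ θ), not the coordinate-descent algorithm used by xrnet, and it omits an intercept and feature standardization; the function name is ours.

```r
two_level_ridge_logistic <- function(X, y, Z, lambda1, lambda2, maxit = 100, tol = 1e-8) {
  p <- ncol(X); q <- ncol(Z)
  W <- cbind(X, X %*% Z)                              # augmented design [X, XZ]
  Lambda <- diag(c(rep(lambda1, p), rep(lambda2, q)))
  theta <- rep(0, p + q)
  for (it in seq_len(maxit)) {
    eta <- drop(W %*% theta)
    mu  <- 1 / (1 + exp(-eta))                        # fitted probabilities
    s   <- pmax(mu * (1 - mu), 1e-10)                 # IRLS weights
    z   <- eta + (y - mu) / s                         # working response
    # Newton step for -l(theta) + (1/2) theta' Lambda theta
    theta_new <- drop(solve(crossprod(W, s * W) + Lambda, crossprod(W, s * z)))
    if (max(abs(theta_new - theta)) < tol) { theta <- theta_new; break }
    theta <- theta_new
  }
  phi <- theta[1:p]; gamma <- theta[p + 1:q]
  list(beta = drop(phi + Z %*% gamma), gamma = gamma)
}
```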
Simulation Study
We compare the prediction performance of our proposed two-level ridge estimator to several competing methods: 1) standard ridge regression; 2) "augmented" ridge regression; 3) the feature-weighted elastic net (fwelnet); and 4) the random forest algorithm. The augmented ridge regression can be viewed as a standard ridge regression (2) with the design matrix [X, XZ]. While the augmented ridge regression is similar in form to the two-level ridge regression (5), the main distinction is that only one tuning parameter is used to shrink both the subject-level and meta-feature effects (ϕ, γ). For the random forest algorithm we input the augmented design matrix [X, XZ]. For comparison purposes, we fix the elastic net mixing parameter of fwelnet to 0 so that it coincides with ridge regularization. Ten-fold cross validation was used to estimate the tuning parameter(s) for the regularization methods. Results are averaged over 500 Monte Carlo replications.
Discrete Z
We simulated data loosely based on the breast cancer real data application in Section 4, with gene expression levels as the features and a quantitative outcome. We first consider the case where the meta-feature matrix Z consists of indicator columns corresponding to the grouping of genes into (not necessarily disjoint) pathways. Specifically, we generate a binary matrix Z of dimension p × 6 such that each column has on average 20% nonzero entries, where we vary p = 400, 1,000, and 2,000. We then set γ = (0.1, 0.1, 0.1, 0.1, 0.1, 0.1)^T. Conditional on Z and γ, we generate the subject-level feature effects by sampling from a multivariate normal distribution, β ∼ N_p(Zγ, σ_β² I_p). We determined how informative the meta features are for the effect sizes β by defining the signal-to-noise ratio SNR_γ = γ^T Σ_Z γ / σ_β², where Σ_Z is the empirical covariance matrix of Z, and solving for σ_β². Finally, we generated the continuous outcome y|X, β ∼ N_n(μ_0 + Xβ, σ_y² I_n), where the rows of X are drawn from N_p(0, Σ_X) with the autoregressive correlation structure (Σ_X)_{ij} = ρ_X^{|i−j|}, with μ_0 = 0.2, ρ_X = 0.5, and σ_y = 1. To measure and compare predictive performance, we compute the test R² based on a test set of n = 1,000.
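The discrete-Z data-generating process just described can be sketched in R as follows. The constants follow the text; drawing the rows of X by multiplying a standard normal matrix by a Cholesky factor of Σ_X is our implementation choice for inducing the autoregressive correlation.

```r
set.seed(123)
n <- 100; p <- 400; q <- 6
snr_gamma <- 1; rho_x <- 0.5; mu0 <- 0.2; sigma_y <- 1

Z <- matrix(rbinom(p * q, 1, 0.2), p, q)            # ~20% nonzero entries per column
gamma <- rep(0.1, q)
sigma2_beta <- drop(t(gamma) %*% cov(Z) %*% gamma) / snr_gamma  # solve SNR_gamma for sigma_beta^2
beta <- rnorm(p, mean = drop(Z %*% gamma), sd = sqrt(sigma2_beta))

Sigma_X <- rho_x^abs(outer(1:p, 1:p, "-"))          # AR(1) correlation among the features
X <- matrix(rnorm(n * p), n, p) %*% chol(Sigma_X)
y <- mu0 + drop(X %*% beta) + rnorm(n, sd = sigma_y)
```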
In general, we see that two-level ridge regression has better prediction performance when compared to its competitors ( Figure 1). As expected, all methods suffer in performance as the number of features increases (Panel A) and improve when the sample size increases (Panel B). In the "small data" scenario (n = 1000, p = 400, q = 6), we observe that fwelnet performs fairly well. However, its performance is comparable to the standard LASSO across several scenarios. This is unsurprising since the outcome is generated assuming that the meta features affect the mean of the subject-level features, not the variance. In both Panels A and B, we set the meta features to be moderately informative (SNR γ = 1). We evaluate the impact of the informativeness of the meta features by comparing the three methods across a range of SNR γ (Panel C). With the exception of the random forest algorithm, we see that two-level ridge regression performs similarly to the standard and augmented ridge regression and to fwelnet when the meta features are virtually uninformative (SNR γ = 0.001) and drastically outperforms them as informativeness increases. We also notice a substantial improvement in the prediction performance of the random forest algorithm as informativeness increases.
Continuous Z
Next we simulated data where the meta features are continuous, by drawing Z from a multivariate normal density. We let γ = 0.01 × (1_50, 0_25, 3_25, 1_25, 0_{q−150})^T, where a_k denotes a vector of length k with all entries equal to a, and generate the rows of Z (of dimension p × q) from N_q(0, Σ_Z), where (Σ_Z)_{ij} = ρ_Z^{|i−j|}. Similar to Section 3.1, we then simulate β ∼ N_p(Zγ, σ_β² I_p) and y|X, β ∼ N_n(μ_0 + Xβ, σ_y² I_n), where the rows of X are drawn from N_p(0, Σ_X). We fix μ_0 = 0.5, ρ_X = 0.5, ρ_Z = 0, and σ_y = 1. We compare the performance of all five methods across different values of n, p, q and SNR_γ. Similar to Section 3.1, we consistently see a gain in prediction performance with the two-level ridge regression when compared to its competitors (Figure 2). When the feature dimension p increases, there is a degradation in prediction performance across all methods; however, incorporating the meta features in a hierarchical framework outperforms both the standard and augmented ridge methods. The trend was also consistent across varied σ_y, ρ_X and ρ_Z (see Figure S1 in the Online Supplementary Materials).
In addition, we also vary the number of meta features in the model (Figure 2, Panel B). Note that as the number of meta features increases, the predictive performance of two-level ridge regression decreases while the performance of standard and augmented ridge regression remains unchanged. The degradation in prediction performance for the two-level ridge regression is expected since we are only increasing the number of noise variables in Z.
Surprisingly, the random forest algorithm performs poorly in all scenarios.
Binary Outcomes
To illustrate two-level ridge regression in a GLM framework, we also compared the performance of all methods under a binary outcome by extending the hierarchical model to logistic regression. The data generating process is similar to Section 3.2; however, y | X, β ~ Bernoulli{π(μ_0 + Xβ)}, where π(·) = exp(·)/{1 + exp(·)}. Again, we fixed μ_0 = 0.5, ρ_X = 0.5, and ρ_Z = 0. The true predictive performance was determined as the area under the curve (AUC) for the test set of 1,000 observations. The results are similar to those observed in the continuous case (Figure 3).
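A sketch of the binary-outcome variant (illustrative only; it reuses X and β from a generator like the one above, and the use of scikit-learn's roc_auc_score is our own choice):

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def simulate_binary_outcome(X, beta, mu0=0.5, rng=None):
    """Draw a Bernoulli outcome through the logistic link pi(.) = exp(.)/(1 + exp(.))."""
    rng = rng or np.random.default_rng(0)
    eta = mu0 + X @ beta
    pi = 1.0 / (1.0 + np.exp(-eta))
    return rng.binomial(1, pi)


def test_auc(model_scores, y_test):
    """Predictive performance on the held-out set, reported as AUC."""
    return roc_auc_score(y_test, model_scores)
```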
Epigenetic Clock
Several studies have demonstrated that DNA methylation levels are strongly associated with aging (see e.g., Berdyshev et al., 1967; Rakyan et al., 2010; Teschendorff et al., 2010; Koch and Wagner, 2011; Horvath et al., 2012; Bell et al., 2012). Using DNA methylation levels, epigenetic clocks (see e.g., Hannum et al., 2013; Horvath, 2013) attempt to accurately predict chronological age, with the goal of identifying molecular biomarkers of aging that can be used to study age acceleration and the relationship between methylation and disease (see e.g., Horvath, 2013; Horvath et al., 2015; Levine et al., 2015; Horvath et al., 2016; Quach et al., 2017). High-dimensional regularization techniques have been used to develop these tools. We evaluate the prediction performance of all three ridge regression models (standard, augmented, and two-level) on a publicly available dataset consisting of n = 656 individuals with methylation measured on the Infinium 450K platform. The size and structure of the data made the other competing methods inoperable. Both xrnet and glmnet permit sparse data structures, which allowed us to analyze the data and compare the performance of two-level ridge regression to that of standard and augmented ridge regression.
While the total number of CpG sites available was 473,034, we reduced the dimensionality of the methylation data by only including the top 250,000 most variable probes. Further, we mapped the methylation probes to the closest gene in terms of physical distance. As meta features of interest we generated indicators for whether a probe maps to a given gene. Thus Z, our matrix of external information, consists of q columns that represent the q unique genes (the jth column of Z codes all probes that map to gene j as one and zero otherwise). After reducing the number of genes in the external data by only considering genes that have at least 10 probes mapped to them, the resulting Z consists of 6,766 unique genes with an average of 33 probes per gene. In our analysis, we normalize Z by dividing each column by its sum (i.e., the number of probes mapping to the corresponding gene). With this standardization the meta-feature estimate γ_j represents the average effect of all probes that map to gene j (j = 1, …, q) on chronological age. Of note, both the features (methylation probes) and the meta features (gene indicators) are high-dimensional.
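One way to assemble such a probe-to-gene meta-feature matrix as a sparse, column-normalized indicator (an illustration; the input format probe_genes and the helper names are hypothetical, not from the authors' pipeline):

```python
import numpy as np
from scipy.sparse import csc_matrix


def build_gene_indicator_Z(probe_genes, min_probes=10):
    """probe_genes: list of length p giving the gene each probe maps to (or None).

    Returns a column-normalized sparse indicator matrix Z (p x q), keeping only
    genes with at least `min_probes` mapped probes, so that gamma_j can be read
    as the average effect of the probes mapping to gene j.
    """
    genes, counts = np.unique([g for g in probe_genes if g is not None],
                              return_counts=True)
    kept = {g: j for j, g in enumerate(genes[counts >= min_probes])}

    rows, cols = [], []
    for i, g in enumerate(probe_genes):
        if g in kept:
            rows.append(i)
            cols.append(kept[g])
    Z = csc_matrix((np.ones(len(rows)), (rows, cols)),
                   shape=(len(probe_genes), len(kept)))

    # Normalize each column by its number of probes (the column sum).
    col_sums = np.asarray(Z.sum(axis=0)).ravel()
    Z = Z.multiply(1.0 / col_sums.reshape(1, -1)).tocsc()
    return Z, kept
```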
We generated 50 training (80%)-test (20%) pairs by randomly splitting the 656 observations. For all three models, 10-fold cross validation was used to tune the penalty parameter(s) in each training set. Similar to the simulation study, we assessed prediction performance using the test R² (averaged across all 50 test sets).
The two-level ridge regression significantly improved prediction performance over standard and augmented ridge (Figure 4). The mean test R 2 for standard, augmented, and two-level ridge regression were 0.71, 0.71 and 0.75, respectively, representing a 5.6% improvement in prediction performance when modeling both the methylation probes and their gene groupings hierarchically. By contrast, augmenting the original design matrix by XZ, i.e. by the linear combinations of the meta features according to Z, did not improve prediction.
Our analysis shows that hierarchical regularization, by adequately leveraging external information (i.e., groupings based on genes), can lead to improved performance in predicting chronological age compared to standard approaches for regularization.
Breast Cancer Mortality
We applied the proposed method to a data set of breast cancer tumors from the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) study, available from the European Genome-Phenome Archive (https://ega-archive.org/studies/EGAS00000000083) (Curtis et al., 2012). The data include cDNA microarray profiling of close to two thousand breast cancer tumor specimens processed on the Illumina HT-12 v3 platform. The METABRIC study was used in an open-source competition (the DREAM Breast Cancer Prognosis Challenge) to improve prediction of survival based on clinical characteristics, gene expression levels, and copy number variation. The primary tumors were originally divided into a discovery set of 997 samples and a validation set of 995 samples. In our analysis, we used the discovery set as the training set to fit the model and the validation set as the test set to evaluate the model's predictive performance. The METABRIC dataset also contains the patients' long-term clinical outcomes and pathological variables (e.g., age at diagnosis, number of positive lymph nodes). Due to significant heterogeneity in expression between ER+/HER2−, ER−, and HER2+ tumors, we restrict our analysis to the subset of patients who were ER+ and HER2−. Furthermore, we dichotomized the patients' survival time at 5 years and used this binary indicator of 5-year breast cancer survival as the outcome to predict. The sample sizes, after subsetting to ER+/HER2− patients not censored within 5 years, were 594 for the training set and 563 for the test set, with mortality (event) rates of 27% and 24%, respectively.
We use the gene expression data, consisting of 29,477 probes (after pre-filtering), as our primary features in the analysis. A previous study by Cheng et al. (2013) developed a model made up of four gene signatures (CIN, MES, LYM, and FGD3-SUSD3), referred to as "attractor metagenes", that captured molecular events known to be associated with clinical outcomes in many cancers. We generated four meta features by grouping probes that belong to the same metagene. In the resulting 29,477 × 4 matrix, the jth column codes all probes that are part of the jth metagene as one and zero otherwise. The CIN, MES, LYM, and FGD3-SUSD3 metagenes consist of 61, 70, 69, and 2 genes, respectively. We normalized each column of the meta-feature matrix by the number of probes so that each column summed to one.
In addition to comparing two-level ridge regression to both standard and augmented ridge regression, we also implemented the following competing methods: xtune (Zeng et al., 2020), feature-weighted elastic net (fwelnet, Tay et al., 2021) and random forest (Breiman, 2001). The tuning parameter(s) for the five regularized models (two-level ridge, standard ridge, augmented ridge, xtune, fwelnet) were tuned using 10-fold cross validation. For comparison purposes we set the elastic net tuning parameter to 0 for fwelnet, which corresponds to a ridge penalty. A stratification scheme was used to generate the folds due to the class imbalance of cases and controls. Similar to our methylation example, the two-level ridge regression improves class prediction over its competitors (Table 1).
Discussion
In this paper, we proposed a two-level hierarchical ridge regression model that can directly incorporate meta features into the estimation. We show that the two-level ridge regression can be reformulated into a single-level ridge regression with two tuning parameters, enabling an efficient coordinate descent model-fitting algorithm that can handle large numbers of features and meta features. We provide closed-form solutions under simple scenarios to gain intuition on how the incorporation of meta features impacts the estimation of the regression coefficients by borrowing information.
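As a sketch of the reformulation idea (our reading of a two-level model y = Xβ + ε with β = Zγ + u and separate ridge penalties on u and γ; the authors' xrnet implementation may differ), the two penalties can be absorbed into a single-penalty ridge fit by rescaling the two column blocks of [XZ, X]:

```python
import numpy as np
from sklearn.linear_model import Ridge


def two_level_ridge(X, y, Z, lam1, lam2):
    """Fit y = X beta + eps with beta = Z gamma + u, with penalties
    lam1 * ||u||^2 and lam2 * ||gamma||^2, via one single-penalty ridge fit.

    Substituting gamma = g / sqrt(lam2) and u = w / sqrt(lam1) turns the
    objective into an ordinary ridge problem (penalty 1) on rescaled blocks.
    """
    q = Z.shape[1]
    design = np.hstack([X @ Z / np.sqrt(lam2), X / np.sqrt(lam1)])
    fit = Ridge(alpha=1.0, fit_intercept=True).fit(design, y)

    gamma = fit.coef_[:q] / np.sqrt(lam2)
    u = fit.coef_[q:] / np.sqrt(lam1)
    beta = Z @ gamma + u
    return beta, gamma, fit.intercept_
```

The two penalties (lam1, lam2) would then be tuned, for example by the ten-fold cross-validation used throughout the paper.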
Our simulation results demonstrate that, in general, two-level ridge regression outperforms its competitors when relevant meta features are available. Importantly, in the presence of non-informative meta features, two-level ridge regression performs comparably to, or only slightly worse than, standard ridge regression without meta features. Thus, there is essentially "no cost", in terms of prediction performance, to incorporating a set of meta features a researcher deems relevant into the model-building process. We also illustrate the advantage of our proposed model in two real data applications, where we observe improved prediction performance for both continuous and binary outcomes.
We envision several future paths to further improve two-level regularization. First, our current method focuses on incorporating an ℓ2 penalty for both the subject-level features and meta features. In general, ℓ2 regularization has been criticized for not being able to perform variable selection (i.e., identifying important predictor variables that are associated with the response of interest), since the ridge penalty shrinks the regression coefficient estimates toward zero, but not exactly to zero. We are currently investigating ways to allow for more general penalties (e.g., LASSO, elastic net, etc.) for both subject-level and meta-feature regularization to enable variable selection. Second, our real data application focused on five-year mortality as the outcome of interest. While this was done to illustrate the performance of two-level ridge regression for binary outcomes, it would be preferable to model the survival time directly. The Cox (1972) model is a widely used approach for modeling feature effects on survival (through the conditional hazard function). We are currently developing the two-level regression with a range of penalties, including lasso and elastic net in addition to ridge, as well as a two-level regularized Cox model, which involves replacing the log-likelihood in (9) with the Cox (1975) log-partial likelihood. We expect the implementations of these methods within the two-level regularization framework to provide a wide range of analytical options for integrating prior information into high-dimensional genomic studies.
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.
Figure 4:
Epigenetic clock: boxplot of test R 2 from 50 training (80%) -test (20%) pairs by randomly splitting the 656 observations. Ten-fold cross validation was used to estimate the tuning parameter(s) for each method. (See Section 4.1 for more information). | 2021-12-16T17:02:59.098Z | 2021-12-13T00:00:00.000 | {
"year": 2021,
"sha1": "49409a61e13fdb94a752cb3d3ca284d8f5af0d94",
"oa_license": "CCBY",
"oa_url": "https://jds-online.org/journal/JDS/article/1262/file/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a747f97953a54058f37232953b5d89e5600aecf9",
"s2fieldsofstudy": [
"Computer Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232238444 | pes2o/s2orc | v3-fos-license | Challenges to vital registration in Nigeria
The objective of this study was to investigate and reveal the socio-demographic factors affecting civil registration in Nigeria. The study employed a cross-sectional survey design. Instruments were administered to 600 participants sampled through a systematic random technique. SPSS (version 22) was used to analyze the data. The importance attached locally to civil registration differed for birth, death and marriage. The multivariate logistic regression analyses showed that residential location (rural or urban), income and education were important factors affecting attitudes to civil registration, and these factors correlated with birth, death and marriage registration to different degrees. We conclude that, in addition to the availability of institutional provisions, moderating variables peculiar to each cultural area should be factored into efforts to improve civil registration.
INTRODUCTION
Vital registration (also known as civil registration) is the continuous collection, recording, collation, analysis, presentation and distribution of data on the occurrences and characteristics of vital events, such as live birth, death, fetal death, marriage, divorce, adoption, legitimization of birth, recognition of parenthood, annulment of marriage, or legal separation. Records from these events are called vital statistics (Brolan and Gouda, 2017; Mikkelsen et al., 2015). Civil registration enables countries to compile accurate, complete and timely vital statistics on birth, death and marriage, which, along with population censuses, are central to estimating population size and demographic dynamics. Cause-of-death data from civil registration systems, for instance, are vital for pinpointing diseases and injuries that have become common in a population (UNICEF, 2018).
Given the huge importance of vital statistics to the public and to policy-makers, it is worrisome that this issue has received so little attention in the literature on postcolonial states such as Nigeria, especially in terms of the contextual factors that challenge this crucial practice. In part, this is because vital statistics are taken for granted in the global centers of research, where all births, deaths and marriages are routinely registered. In these countries, vital statistics are readily available for governments to monitor and to use for social and economic planning in key sectors such as health, education, employment and housing (Williams, 2014). However, governments in low- and middle-income countries like Nigeria have the same need for data for planning development and for ensuring the effective use of limited resources.
As a result, there is now momentum building within
As a result, there is now a momentum-building within *Corresponding author. E-mail: chidi.ugwu@unn.edu.ng.
these countries and within the global development community towards the strengthening of civil registration and vital statistics (CRVS) systems (Makinde et al., 2016). There are cultural and contextual constraints to proper civil registration and vital statistics that need to be checked in order for vital statistics to be readily available to the government for planning purposes. These are financial constraints; lacunae in the legal basis; organizational problems affecting supervision and control arising out of the uncoordinated involvement of several agencies in the registration system; low priority assigned to registration work by administrators and policy makers, resulting in indifferent performance of the registration hierarchy; public apathy for lack of effective incentives; and lack of trained manpower. The time lag in the compilation and presentation of data remains a common problem as well. This can manifest in one or more of the following ways: delays in transmission of sub-national compilations; delays in receipt of registration records from sub-national areas at the central office; delayed registration, causing bottle-necks in tabulation by date of occurrence; absence or incompleteness of information in the basic record etc. (Isara and Atimati, 2015; United Nations, 2017).
Lack of awareness of vital registration is a major development problem in Nigeria. Tobin et al. (2013), in their study done in Edo State, Southern Nigeria, found that as many as 40% of respondents had not heard of death registration, and 15% did not know its relevance. An earlier study by Akande and Sekoni (2005) in Kwara State, North-central Nigeria, had likewise revealed low awareness of the importance of vital registration. It was confirmed in another study by UNICEF around the same period that about 70% of the over five million annual births in Nigeria went unregistered (UNICEF, Nigeria, 2007). Interestingly, there is as yet no national average for death and marriage registration in Nigeria. In view of the above, Makinde et al. (2016) contend that there is a need for adequate demographic parameters and data, especially in developing countries where there is a huge research gap. Dake and Fuseini (2018) therefore saw a need for a vibrant vital registration system which would allow the monitoring of trends and tracing of progress in the population.
The few studies exploring the reasons for poor vital statistics in Nigeria show that a large percentage of the populace are aware of vital registration, particularly of birth, but practice remains scant (Makinde et al., 2016). It remains to be investigated whether gaps in awareness, lack of clarity about the registration process, individual perceptions, and other socio-cultural impediments may be contributory to this observed pattern. This study, therefore, aims to investigate the contextual (that is structural and public perception) factors that pose challenges to vital statistics in Nigeria. The study feeds into the need for indigenous researchers to conduct baseline studies to come to terms with contextual challenges to vital statistics in sub-Saharan Africa. This study will uncover the extant situation of CRVS in the sample states, and will reflect the condition in Nigeria since CRVS is driven and coordinated nationally. Although some studies suggest that these factors can be local and peculiar, the findings of this research might suggest whether some general approach can be used to address the poor state of the CRVS in Nigeria, or whether studies should be commissioned to generate more locally situated suggestions across the states. We hold firmly that the pathway(s) to a workable and sustainable vital registration system in Nigeria can be articulated using the lessons that issue from the findings of this study.
Theoretical orientation
The Social Ecological Theory (SET) provides the conceptual framework for this study. SET was first introduced as a conceptual model for understanding human development by Urie Bronfenbrenner in the 1970s and later formalized as a theory in the 1980s (Bronfenbrenner, 1977). The theory tries to understand the multifaceted levels of human development within a society and how individuals and the environment interact within a social system. It describes five levels of influence on behavior: individual, interpersonal, organizational, community and public policy (Newes-Adeyi et al., 2000). The individual level is concerned with an individual's level of awareness and knowledge of civil registration. The interpersonal level has to do with a person's relationships with other people, such as family and friends, and how these help to shape the individual's attitude toward civil registration. At the organizational level, the theory explains how organizations like schools, hospitals and others can galvanize to ensure complete and comprehensive civil registration systems. The community level has to do with the combined action of the various organizations in an area to promote the civil registration system. For instance, adequate information about civil registration can enhance its efficiency at the community level. Thus, targeted efforts at stressing the importance of civil registration at the community level can help to change people's attitude about civil registration. The final level, the policy level, stresses the need for policy support to strengthen the civil registration system. The existing civil registration system in Nigeria requires some strategic actions to encourage the registration of vital events. The socio-ecological model has been used to analyze a wide range of issues including public health promotion, violence prevention, safety in agricultural environments, and government programs (Kilanowski, 2017; Golden and Earp, 2012; Hong et al., 2012). The theory is considered most appropriate for this study because it offers an explanation of how reciprocal interactions affect civil registration systems in Nigeria.
Study design and setting
The study adopted the cross-sectional survey research design in generating data to answer the research questions and test the hypotheses. The study location was southeastern Nigeria, one of the six geo-political zones of the country. The area comprises five states: Abia, Anambra, Ebonyi, Enugu and Imo. Each of these states has three senatorial zones and a number of local government areas (LGAs), with each LGA having at least one vital registration centre. The 2006 population census estimated the combined population of these states at 16,395,555. Most of their populations are Igbo and predominantly Christian. We purposively selected the southeastern region for convenience of language, distance and spatial familiarity. We used urban and rural indices to stratify the five states into two groups and purposively selected Ebonyi to represent the more rural states and Enugu to represent the more urbanized states. Enugu State was chosen because, being the administrative headquarters of the old Eastern Region, it is likely to be bureaucratically and administratively mature, in contradistinction to Ebonyi, which is a relatively young state and ordinarily likely to be less administratively advanced than Enugu. Thus, placing the results from both states side by side will provide useful comparative insights. Ebonyi and Enugu states have 2019 projected populations of 3,025,956 and 4,542,446 respectively.
Participants
The population for this study comprises adult males and females aged 18 years and above who have had at least one child. The justification for working with this category of the population is that they are the people most likely to have relevant experiences of the subject matter. A sample of 600 adult males and females, which was considered representative enough for the nature of the study, was engaged. A multistage sampling procedure, which entailed successive selection of LGAs, wards, housing units and respondents, was employed. We stratified the LGAs in each sample state according to urban and rural indices and randomly selected one from each of the two strata. Simple random sampling was used to select two wards from each of the selected LGAs, giving us eight wards in all. In each of the eight selected wards across the two sample states, we visited households using the systematic random sampling technique until we arrived at 75 respondents. Doing this across the eight wards gave us a total of 600. (We ignored village boundaries in our sample selection since villages within the wards are not far from each other and do not differ in any significant way, especially with regard to the subject matter of this study.)
Data collection
To explore the research questions, we employed an interviewer-based questionnaire designed to provide information on the sociodemographic characteristics of the respondents as well as their views and practices on vital registration. The questionnaire items were translated into Igbo, the local language spoken in the sample states. Each fieldworker was given equal numbers of English and Igbo versions of the instrument. This was to ensure that translations would not vary should the fieldworker face situations that warranted translation, and it helped to smoothen the data collection process. An other-administered technique was adopted for administering the questionnaire; this was to avoid possible misunderstanding of any question by the respondents, which would have led them to supply unintended responses. This method of administration also ensured that all 600 copies of the questionnaire were filled and returned, avoiding the loss and damage that sometimes happen with the self-administered approach.
Data management
All collected data were double-checked by the field supervisors before entry into the computer. The Statistical Package for the Social Sciences (SPSS) was used to analyze the data. A univariate analysis was done to highlight the distribution of respondents by key variables including birth, death and marriage registration. Multivariate logistic regression was used to examine the relationship between the sociodemographic and socioeconomic variables and the registration of birth, death and marriage, respectively.
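As an illustration of the analysis described above (not the authors' code; the variable names and codings below are hypothetical and would need to match the actual coded survey data), the birth-registration model could be fit as a multivariate logistic regression whose exponentiated coefficients give the reported odds ratios:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf


def fit_birth_registration_model(df: pd.DataFrame):
    """Multivariate logistic regression of birth registration (1 = registered)
    on sociodemographic predictors; exponentiated coefficients are odds ratios."""
    model = smf.logit(
        "registered_birth ~ C(state) + C(sex) + C(marital_status) + "
        "C(education) + C(residence) + income + children_born + children_living",
        data=df,
    ).fit(disp=False)
    odds_ratios = np.exp(model.params)
    conf_int = np.exp(model.conf_int())  # 95% CI on the odds-ratio scale
    return model, odds_ratios, conf_int
```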
Participants' profiles
A total of 600 respondents, aged between 18 and 88 years with a mean age of 40.8 years, were sampled for the study. As shown in Table 1, the average age was 41 years with a standard deviation of 13.74. Most of the respondents were females (60.7%), and up to 81.7% were married. Civil/public service (37%) and trading (34%) were the occupations most of the respondents engaged in. Most of the respondents were highly educated, with as many as 43% having completed tertiary education. About half (49%) of the respondents were Catholics. Birth registration appeared to be quite common among the respondents, with as many as 73.5% reporting that they had registered someone's birth. Registration of the death of a relative, however, was quite uncommon, with only 33.8% reporting that they had done so. Finally, 66.8% of those who had at some point married reported that they registered their marriage.
Birth registration
The multivariate analyses for registration of birth show that state of residence, sex, marital status, number of children born and number of children living were significant predictors of birth registration. Compared to people who resided in Ebonyi, people who resided in Enugu were less likely to register a birth (OR 0.046; CI 0.005-0.440). Compared to females, males were less likely to register the birth of their child (OR 0.247; CI 0.101-0.607). Those who had ever married were more likely to register a birth than those who had never married (OR 3.272; CI 1.014-10.558). The likelihood of registering a birth decreased with the number of children born (OR 0.509; CI 0.257-1.007), while it increased with the number of children living.
DISCUSSION
The study was conducted to assess the contextual factors affecting the practice of vital registration in the study area, with emphasis on the registration of birth, death and marriage. Our findings showed that 73.5% of the respondents had registered a child in their household with a birth and death registry or another agency. This indicates that birth registration is continuing the trend of increase from 31.5% in 2007 to 41.5% in 2011 revealed by Makinde et al. (2016). Williams (2014) reported a higher figure of 84.8%, having taken into consideration the certificates cited and affirmation from parents. The increase in birth registration may not be unconnected with the improved awareness of birth registration reported in many parts of Nigeria (Akande and Sekoni, 2005; Tobin et al., 2013). Although the rate of birth registration in our study is higher than the national rates of 33.09% for children under 1 year, 31.19% for ages 1-5 and 35.72% for those above 5 years (NBS, 2018), it is still considered suboptimal when compared with industrialized nations that register nearly all their births. The multivariate logistic regression (Table 2) suggests that contextual (or social location) factors affect the registration of births in Nigeria. Our comparison of two states (Ebonyi and Enugu) in the south-eastern region of Nigeria showed marked differences in the practice of birth registration. People from Ebonyi were more inclined to register a birth than people from Enugu, the more urbanized of the two sample states. Though NPC and ICF (2014) Our study also showed that sex is one of the factors influencing the registration of birth. Males were less likely to register the birth of a child in their household than females. The reason for this might not be unconnected to the fact that men are not major users of health facilities compared to women. Utilisation of health facilities improves the chances of contact with health administrators, who have been known to influence birth registration. In that light, Makinde et al. (2016) have shown that women who received Antenatal Care (ANC) had greater odds of registering the birth of their child. Having ever been married was found to be a predictor of birth registration. Similar findings have been reported by Isara and Atimati (2015) as well as Tobin et al. (2013). Marriage facilitates information sharing, and as such, unmarried people may not always have someone close to enlighten and motivate them to register a birth. We also found that the likelihood of registering a birth decreased with the number of children one had while it increased with the number of children alive. Again, this may be connected to the importance attached to children.
As the study revealed, the number of people who registered the death of a relative is quite low (33.8%). Tobin et al. (2013) reported a similarly low figure of 39.7% in a study in Southern Nigeria. Low registration of death may be because of the circumstances surrounding death. To illustrate, births occur mostly in health facilities where health professionals can influence the registration process (Dake and Fuseini, 2018); most deaths, however, occur outside health facilities where health workers cannot influence the registration process. Also, people have been known to purposely underreport deaths for social and cultural reasons (Williams, 2014), and this would limit the willingness to register the death of a relation.
Unlike birth registration, place of residence and educational status are significantly associated with death registration. Specifically, urban residence and secondary or tertiary education predicted the likelihood of death registration. These two factors have something in common: exposure to more information and awareness of the importance of civil registration. As such, it is suggestive that the reason for the registration of deaths by people in these two categories has to do with awareness and knowledge of its importance. Similar findings on high educational status being a determining factor in the registration of birth and death have likewise been reported by Isara and Atimati (2015).
In our study, only 66.8% of the ever-married respondents indicated that their union was registered. Again, the challenges associated with both birth and death registration may also extend to marriage registration. Moreover, the absence of a law on compulsory registration of marriage may be to blame. Yet the registration of marriage is very important, as it confers legality on the union, especially in official matters and divorce (Mian and Hossain, 2013). Among the factors investigated, our study showed that place of residence was significantly associated with marriage registration: urban dwellers were more likely to register their marriage than rural dwellers. Again, urban dwellers are better exposed to the knowledge and importance of marriage registration. Besides, considering the financial commitment involved, rural dwellers are likely to find it rather challenging because of the higher rate of poverty in rural areas. Bennouna et al. (2016) found in their study in Indonesia that financial constraints and bureaucratic bottlenecks were major barriers to the registration of marriage.
Our results also showed that the likelihood of marriage registration increases with age. This may be because older couples have more need for it than younger couples. Occupation, especially with regard to traders and civil/public servants, acted as a predictor of marriage registration. These occupations offer room for more income than farming, and as earlier presented, finance is an important factor in marriage registration. Moreover, there are added benefits tied to one's marital status for those working in government parastatals. These benefits could influence the registration of marriages by civil/public servants. Although not much is written on marriage, the report by NBS (2018) indicates that the number of people who reported registering their marriage is falling. With the current poor socioeconomic situation in Nigeria, many may not afford the cost of marriage registration. As shown in our study, completing any educational level is a predictor of the registration of marriage. The role that education plays in all vital registration has been clearly acknowledged in this study. Education helps people appreciate the importance of civil registration. In addition, the number of children is a significant predictor of marriage registration and, as our study shows, the odds of registering decreased with the number of children one has.
Conclusion
Our results revealed that the challenges of civil registration were tied to its perceived relevance in the society and its intersection with one's sociodemographic characteristics. For instance, registration of birth was highest because it relates to one's legal identity and serves as a requirement for inclusion in various government institutions and social and private establishments. For death registration, however, the low rate of practice may serve as an indication of unrecognized relevance. As such, the importance of sensitization cannot be downplayed. Further, a birth certificate is a prerequisite for school enrolment and employment in most formal organizations. All these are in line with the SET put forward by Bronfenbrenner (1977), namely that the factors affecting vital registration operate at the individual, interpersonal, organizational, community and public policy levels. Since the Nigerian National Population Commission (NPC) (2008) had earlier noted that one of the main challenges of vital registration was publicity, persuasive programmes on media outlets should therefore be used to create awareness and to highlight the availability of civil registration centres as well as the individual and national benefits of complying. Some compulsive measures could also be employed to improve compliance should persuasion alone not produce the desired results.
In addition to the factors of awareness, availability and robustness of institutional provisions, moderating variables peculiar to each cultural area should be factored into efforts to improve civil registration. Although our study is limited by the absence of qualitative data, we nonetheless succeeded in raising a fresh perspective on how contextual (cultural/social) factors could influence people's attitudes to civil registration, and therefore affect its rate of success in different places, no matter the extent of institutional arrangements for the purpose. | 2021-03-15T14:50:27.247Z | 2021-03-31T00:00:00.000 | {
"year": 2021,
"sha1": "24c62e15cf3e003552dae80ba6544c463590ed9b",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/IJSA/article-full-text-pdf/620DBFE66221.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "24c62e15cf3e003552dae80ba6544c463590ed9b",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Geography"
]
} |
260681278 | pes2o/s2orc | v3-fos-license | Meta-Tsallis-Entropy Minimization: A New Self-Training Approach for Domain Adaptation on Text Classification
Text classification is a fundamental task for natural language processing, and adapting text classification models across domains has broad applications. Self-training generates pseudo-examples from the model's predictions and iteratively trains on the pseudo-examples, i.e., it minimizes the loss on the source domain and the Gibbs entropy on the target domain. However, Gibbs entropy is sensitive to prediction errors, and thus self-training tends to fail when the domain shift is large. In this paper, we propose Meta-Tsallis Entropy Minimization (MTEM), which applies a meta-learning algorithm to optimize an instance-adaptive Tsallis entropy on the target domain. To reduce the computation cost of MTEM, we propose a technique to approximate the second-order derivative involved in the meta-learning. To efficiently generate pseudo labels, we propose an annealing sampling mechanism for exploring the model's prediction probability. Theoretically, we prove the convergence of the meta-learning algorithm in MTEM and analyze the effectiveness of MTEM in achieving domain adaptation. Experimentally, MTEM improves the adaptation performance of BERT by an average of 4 percent on the benchmark dataset.
Introduction
Text classification plays a crucial role in language understanding and anomaly detection for social media text. With the recent advances in deep learning [Kipf and Welling, 2017; Devlin et al., 2019], text classification has experienced remarkable progress. Despite this success, existing text classification approaches are vulnerable to domain shift. When transferred to a new domain, a well-performing model undergoes severe performance deterioration. To address such deterioration, domain adaptation, which aims to adapt a model trained on one domain to a new domain, has attracted much attention [Du et al., 2020; Lu et al., 2022].
A direct way to achieve domain adaptation is to build a training set that approximates the distribution of the target domain. For this purpose, self-training [Zou et al., 2019; Liu et al., 2021] uses the unlabeled data from the target domain to bootstrap the model. Specifically, self-training first uses the model's predictions to generate pseudo-labels and then uses the pseudo-labeled data to re-train the model. In this process, self-training forces the model to increase its confidence in its most confident class, which is in essence a Gibbs entropy minimization process [Lee and others, 2013].
However, Gibbs entropy minimization is sensitive to prediction errors [Mukherjee and Awadallah, 2020]. To handle the intractable label noise (i.e., prediction errors), data selection strategies have been designed to select reliable pseudo labels [McClosky et al., 2006; Reichart and Rappoport, 2007; Rotman and Reichart, 2019]. Among them, many successful approaches [RoyChowdhury et al., 2019; Shin et al., 2020] are grounded in prior knowledge about the tasks (e.g., the temporal consistency of video [RoyChowdhury et al., 2019]) and are thus hard to apply to text classification tasks. Since the Gibbs entropy minimization process in self-training is meant to minimize the model's uncertainty on the new domain, [Liu et al., 2021] recently proposed to replace the Gibbs entropy with the Tsallis entropy, another effective metric for measuring uncertainty.
Tsallis entropy is a generalization of Gibbs entropy, referring to a set of entropy types controlled by the entropy index. Fig. 1 shows how the Tsallis entropy changes with different entropy indexes for binary problems. When the entropy index is small (the resultant entropy curve is sharp), the entropy minimization process tends to push one dimension of the prediction sharply toward 1.0, and is thus only suitable for scenarios where pseudo labels are reliable. Otherwise, Tsallis entropy with a larger entropy index (a smoother curve) is more suitable for scenarios with large label noise, e.g., domain adaptation scenarios with a large domain shift. Researchers [Liu et al., 2021] tried to use the Tsallis entropy to improve self-training, but the proposed objective only involves a unified entropy index for all unlabeled data in the target domain. As illustrated in [Kumar et al., 2010; Kumar et al., 2020], different instances in the target domain have different degrees of shift from the source domain. Thus, a unified entropy index cannot fully exploit the different pseudo instances in the target domain.
In this paper, we propose Meta-Tsallis-Entropy Minimization (MTEM), which uses an instance-adaptive Tsallis entropy minimization process to minimize the model's prediction uncertainty on the target domain. Since the best entropy indexes change along with the training, manually selecting an appropriate entropy index for each unlabeled example is intractable. Thus, we employ meta-learning to adaptively learn a suitable entropy index for each unlabeled example. The meta-learning process iterates over the inner loop on the target domain and the outer loop on the source domain. In this process, the parameters optimized on the target domain must also achieve a low loss on the source domain, which forces the model to acquire task information on the target domain. However, the proposed MTEM still faces two challenges.
Firstly, the meta-learning algorithm in MTEM involves a second-order derivative (i.e., the gradient with respect to the entropy indexes), which requires a large computation cost, especially when the model is large. Hence, it is hard to apply MTEM to prevailing large pre-trained language models. To this end, we propose to approximate the second-order derivative via a Taylor expansion, which reduces the computation cost substantially.
Secondly, minimizing the Tsallis entropy requires the guidance of pseudo labels (see § 2.2 and § 2.3). Previous self-training approaches generate pseudo labels by selecting the prediction with the largest probability (i.e., greedy selection), which tends to collapse when the model's predictions are unreliable [Zou et al., 2019]. To this end, we propose to sample pseudo labels from the model's predicted distribution instead of using a greedy selection. Further, we propose an annealing sampling mechanism to improve the sampling efficiency.
To summarize, our contributions are threefold: (i) We propose Meta-Tsallis-Entropy Minimization (MTEM) for domain adaptation on text classification. MTEM involves an approximation technique to accelerate the computation and an annealing sampling mechanism to improve the sampling efficiency. (ii) We provide theoretical analysis for MTEM, including its effectiveness in achieving domain adaptation and the convergence of the involved meta-learning process. (iii) Experiments on two benchmark datasets demonstrate the effectiveness of MTEM. Specifically, MTEM improves BERT on cross-domain sentiment classification tasks by an average of 4 percent, and improves BiGCN on the cross-domain rumor detection task by an average of 21 percent. (As the rest of the paper involves many mathematical symbols, we provide a symbol list in Tab. 7 of Appendix A for reading convenience.)
Domain Adaptation on Text Classification
Text classification is a task that aims to map a text to a specific label space. For a correctly classified example, the process can be expressed as y_i = arg max_k f(x_i; θ)_k, where x_i is an input text, y_i ∈ {0, 1}^K is the corresponding one-hot label with K classes, f is a model with parameters θ, and f(x_i; θ) is the prediction probability. Domain adaptation is to adapt a text classification model trained on the source domain (denoted as D_S) to the target domain (denoted as D_T). On the source domain, a set of labeled instances {(x_i, y_i)} drawn from D_S is available. On the target domain, only unlabeled text is available, which we denote as D_T^u.
Tsallis Entropy
In information theory, Tsallis entropy refers to a set of entropy types, where the entropy index is used to identify a specific entropy. Formally, the Tsallis entropy with entropy index α is written as Eq. (1), where p_i is the prediction probability. When α > 1, e_α is a concave function [Plastino and Plastino, 1999]. When α → 1, e_α recovers the Gibbs entropy, as shown in Eq. (2). More intuitively, Fig. 1 exhibits the impact of the entropy index on the curve of the Tsallis entropy. Specifically, a larger entropy index makes the curve smoother, while a smaller entropy index produces a sharper curve.
Extending from the unsupervised Tsallis entropy, the corresponding Tsallis loss ℓ α (p i , y i ) is expressed as Eq. (3). When α → 1, the corresponding supervised loss is the widely used cross-entropy loss (see Appendix B.1).
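The following sketch gives one common parameterization of these quantities (an illustration only; the paper's exact Eqs. (1)-(3) are not reproduced here): the Tsallis entropy (1/(α−1))(1 − Σ_k p_k^α), which tends to the Gibbs entropy as α → 1, and a label-guided Tsallis loss that reduces to cross-entropy in the same limit.

```python
import torch


def tsallis_entropy(probs: torch.Tensor, alpha: float) -> torch.Tensor:
    """Tsallis entropy (1/(alpha-1)) * (1 - sum_k p_k^alpha) per example.

    As alpha -> 1 this recovers the Gibbs entropy -sum_k p_k log p_k.
    probs: (batch, K) prediction probabilities.
    """
    if abs(alpha - 1.0) < 1e-6:
        return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return (1.0 - probs.pow(alpha).sum(dim=-1)) / (alpha - 1.0)


def tsallis_loss(probs: torch.Tensor, onehot: torch.Tensor, alpha: float) -> torch.Tensor:
    """Label-guided Tsallis loss (1/(alpha-1)) * (1 - p_y^(alpha-1)).

    As alpha -> 1 it reduces to the cross-entropy -log p_y, the convex
    surrogate used when (pseudo) labels replace the model's own probabilities.
    """
    p_y = (probs * onehot).sum(dim=-1).clamp_min(1e-12)
    if abs(alpha - 1.0) < 1e-6:
        return -p_y.log()
    return (1.0 - p_y.pow(alpha - 1.0)) / (alpha - 1.0)
```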
Self-Training for Domain Adaptation
Self-training aims to achieve domain adaptation by optimizing the model's parameters with respect to the supervised loss on the source domain and the unsupervised loss (prediction uncertainty) on the target domain, as shown in Eq. (4).
where L_S is a supervised loss, L_T is an unsupervised loss, and λ is a coefficient that balances L_S and L_T. Due to its simplicity, Gibbs entropy is widely used to measure the prediction uncertainty on the target domain [Zou et al., 2019; Zou et al., 2018], as expressed in Eq. (5). However, as L_T(θ|D_T^u) in Eq. (5) is a concave function, minimizing L_T(θ|D_T^u) is hard to converge because the gradients at the minima are larger than 0 [Benson, 1995]. For this purpose, self-training uses pseudo labels to guide the entropy minimization process, i.e., replacing Eq. (5) with Eq. (6).
Meta-Tsallis-Entropy Minimization
MTEM inherits the basic framework of self-training, i.e., (i) simultaneously minimizing the supervised loss on the source domain and the unsupervised loss (prediction uncertainty) on the target domain; (ii) generating pseudo labels to guide the entropy minimization process. The improvements of MTEM are threefold. Firstly, we propose an instance adaptive Tsallis entropy to measure the prediction uncertainty (§ 3.1). Secondly, we propose a meta-learning algorithm to minimize the joint loss (§ 3.2), which involves an approximation technique to reduce the computation cost (§ 3.3). Thirdly, we propose to generate pseudo labels with an annealing sampling mechanism (§ 3.4). Fig. 2 exhibits the overview of MTEM, and Algorithm 1 presents the core process.
Instance Adaptive Tsallis Entropy
The instance adaptive Tsallis entropy, i.e., the unsupervised loss on the target domain, is obtained by replacing the Gibbs entropy in Eq. (5) with a per-instance Tsallis entropy e_{ψ[k]}(f(x_k; θ)), as shown in Eq. (7), where ψ[k] indicates the entropy index for unlabeled data x_k and e_{ψ[k]} is the resultant Tsallis entropy.
Such an instance-adaptive Tsallis entropy minimization is more effective in exploiting the model's predictions. In general, prediction correctness differs across instances; thus, the entropy index should differ across instances too. For instances with wrong predictions, we can increase the entropy index to make the Tsallis entropy smoother, so that the model is updated more cautiously. Conversely, for instances with correct predictions, we can set a small entropy index to update the model more aggressively.
However, as the true labels are unknown, setting an appropriate entropy index for each unlabeled example is intractable. Furthermore, as prediction errors can be corrected during training, the best entropy indexes change along with the updates of the model. To handle the above issues, we propose to use meta-learning to determine the entropy indexes automatically.
Meta-Learning
Meta-learning can help MTEM find the appropriate entropy indexes for the following reasons. Firstly, parameters optimized with a well-determined instance-adaptive Tsallis entropy (i.e., with appropriately determined entropy indexes) should be more generalizable, which corresponds to the core characteristic of meta-learning, i.e., training a model that can be quickly adapted to a new task [Finn et al., 2017]. Secondly, the meta-learning process updates the entropy indexes dynamically, thus maintaining consistency between the model's parameters and the entropy indexes throughout the whole training process. Specifically, the meta-learning algorithm in MTEM iterates over the Inner-Loop on the target domain and the Outer-Loop on the source domain.
In the Inner-Loop, we fix the entropy indexes and optimize the model's parameters with respect to the instance-adaptive Tsallis entropy on the target domain. Specifically, we sample a batch of unlabeled data B from D_T^u and update the model with respect to its instance-adaptive Tsallis entropy L_T(θ, ψ|B), as shown in Eq. (8). However, as introduced in § 2.2 and § 2.3, e_ψ in Eq. (7) is a concave function that is hard to minimize. Following the practice in self-training, we use the equation in Eq. (9) to transform the concave function into a convex one, where ỹ_i ∈ {0, 1}^K is a one-hot pseudo label sampled from the model's prediction probability (i.e., ỹ_i ∼ f(x_i; θ_t)). The objective in the Inner-Loop is then given by Eq. (10). In the Outer-Loop, we validate the updated model (i.e., θ_{t+1}(ψ_t) in Eq. (8)) with labeled data from the source domain. Since different entropy indexes ψ lead to different θ_{t+1}(ψ_t), we adjust ψ to find the θ_{t+1}(ψ_t) that best adapts to the validation set. For this purpose, we optimize the entropy indexes ψ to minimize the validation loss.
In each meta-validation step, we sample a validation batch of labeled data from the source domain, i.e., V ∼ D_S, and use the validation loss L_S(θ_{t+1}(ψ_t)|V) to evaluate the model; we then update the entropy indexes ψ with ∇_ψ L_S(θ_{t+1}(ψ_t)|V). With the updated entropy indexes ψ_{t+1}, we return to update the model's parameters, as shown in line 8 of Algorithm 1.
Taylor Approximation Technique
The first challenge in the above meta-learning algorithm is the computation cost of evaluating ∇_ψ L_S(θ_{t+1}(ψ_t)|V).
where ϵ is a small scalar and θ_+ and θ_− are defined in Eq. (13), in which L_T(θ) is the abbreviation of L_T(θ, ψ_t|B). As demonstrated in [Liu et al., 2018], Eq. (12) is accurate enough as an approximation when ϵ is small. However, computing ∇_ψ L_T in Eq. (12) still requires a considerable computation cost, as it involves a forward operation (i.e., L_T) and a backward operation (i.e., ∇_ψ L_T). To this end, we derive the explicit form of ∇_ψ L_T as Eq. (14) (details are in Appendix B.3).
ℓ_1(x_i, ỹ_i) and ℓ_{ψ[i]}(x_i, ỹ_i) in Eq. (14) can be computed without gradients, thus avoiding the time-consuming back-propagation process. Therefore, computing ∇_ψ L_T with the above explicit form can further reduce the computation cost.
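A sketch of this kind of approximation (illustrative only: it follows the general finite-difference scheme of Liu et al. (2018) rather than reproducing Eqs. (12)-(14), and the function names, the toy interface, and the simplification of evaluating the validation gradient at the current parameters are our own assumptions):

```python
import torch


def hypergradient_psi(model, psi, batch_T, batch_V, unsup_loss, sup_loss,
                      inner_lr: float, eps_scale: float = 0.01):
    """Finite-difference approximation of grad_psi L_S(theta_{t+1}(psi)).

    unsup_loss(model, psi, batch) -> scalar instance-adaptive Tsallis loss.
    sup_loss(model, batch)        -> scalar supervised loss on source data.
    This avoids building a second-order computation graph.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Direction along which the parameters are perturbed: the gradient of the
    # validation (source) loss, evaluated here at the current parameters.
    g_val = torch.autograd.grad(sup_loss(model, batch_V), params)
    eps = eps_scale / (sum(g.norm() ** 2 for g in g_val).sqrt() + 1e-12)

    def grad_psi_at(shift):
        with torch.no_grad():
            for p, g in zip(params, g_val):
                p.add_(shift * eps * g)          # theta +/- eps * grad_theta L_S
        gp = torch.autograd.grad(unsup_loss(model, psi, batch_T), psi)[0]
        with torch.no_grad():
            for p, g in zip(params, g_val):
                p.sub_(shift * eps * g)          # restore the parameters
        return gp

    grad_plus, grad_minus = grad_psi_at(+1.0), grad_psi_at(-1.0)
    # Chain rule through the inner update theta - inner_lr * grad_theta L_T.
    return -inner_lr * (grad_plus - grad_minus) / (2.0 * eps)
```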
Annealing Sampling
In domain adaptation, the naive sampling mechanism in the Inner-Loop can suffer from low sampling efficiency. When the domain shift is large, the model usually performs worse in the target domain than in the source domain. As a result, the model's prediction confidence (i.e., the sampling probability) on the true class is small. Consider an extreme binary classification case where the instance's ground-truth label is [0, 1]ᵀ but the model's prediction is [0.99, 0.01]ᵀ; the probability of sampling the ground-truth label is only 0.01. In this case, most of the training cost is wasted on pseudo instances with erroneous labels.
To improve the sampling efficiency, we propose an annealing sampling mechanism. With a temperature parameter κ, we control the sharpness of the model's prediction probability (sampling probability) by p(·; θ, x_i, κ) = softmax(score/κ), where p is the sampling probability and score is the model's original prediction score. In the earlier training phase, the model's predictions are not that reliable, so we set a high temperature parameter κ to smooth the model's prediction distribution. With this setting, different class labels are sampled with roughly equal probability, which guarantees the possibility of sampling the correct pseudo label. As the training process converges, the model's predictions become more and more reliable, so the temperature scheduler decreases the temperature. We design a temperature scheduler as shown in Eq. (15), where σ denotes the sigmoid function, κ_max and κ_min are the expected maximum and minimum temperatures, and s is a manually set positive scalar. t is the index of the current training iteration and T_max is the maximum number of training iterations. Thus, t/T_max increases from 0.0 to 1.0, and the input s − 2s × t/T_max decreases from s to −s. In our implementation, s is a large value that satisfies σ(s) ≈ 1.0 and σ(−s) ≈ 0.0, which guarantees that κ_t decreases from κ_max to κ_min.
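One schedule consistent with this description, together with the temperature-smoothed sampling of pseudo labels (the exact Eq. (15) is not reproduced here, and the default values of κ_max, κ_min and s below are placeholders, not the paper's settings):

```python
import torch


def annealed_temperature(t: int, t_max: int, kappa_max: float = 5.0,
                         kappa_min: float = 1.0, s: float = 10.0) -> float:
    """Sigmoid schedule that decays the sampling temperature from kappa_max to
    kappa_min as training proceeds (t = 0 .. t_max); s is large so that
    sigmoid(s) ~ 1 and sigmoid(-s) ~ 0."""
    x = torch.tensor(s - 2.0 * s * t / t_max)
    return kappa_min + (kappa_max - kappa_min) * torch.sigmoid(x).item()


def sample_pseudo_labels(logits: torch.Tensor, kappa: float) -> torch.Tensor:
    """Sample one-hot pseudo labels from the temperature-smoothed prediction
    distribution softmax(score / kappa) instead of taking the arg-max."""
    probs = torch.softmax(logits / kappa, dim=-1)
    labels = torch.multinomial(probs, num_samples=1).squeeze(-1)
    return torch.nn.functional.one_hot(labels, num_classes=logits.size(-1)).float()
```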
Theoretical Analysis
Proofs of Lemma 1, Theorem 1, Theorem 2, and Theorem 3 are detailed in Appendix A.
Lemma 1. Suppose the operations in the base model are Lipschitz smooth. Then ℓ_{ψ[i]}(f(x_i, θ), ỹ_i) is Lipschitz smooth with respect to θ for all ψ[i] > 1 and all x_i ∈ D_S ∪ D_T^u, i.e., there exist finite constants ρ_1 and L_1 satisfying the corresponding smoothness bounds; it is also Lipschitz smooth with respect to ψ[i], i.e., there exist finite constants ρ_2 and L_2 satisfying the corresponding bounds.
Assumption 1. The learning rate η_t (line 10 of Algorithm 1) satisfies η_t = min{1, k_1/t} for some k_1 > 0 with k_1/t < 1. In addition, the learning rate β_t (line 8 of Algorithm 1) is a monotone descending sequence with β_t = min{1/L, k_2/t} for some k_2 ≥ L.
Based on Assumption 2 and Lemma 1, we deduce Theorem 1 and Theorem 2. Theorem 1 demonstrates that, by adjusting ψ, the model trained on the target domain generalizes to the source domain immediately. In other words, by adjusting ψ, the learning process on the target domain (i.e., Eq. (10)) learns domain-agnostic features. At the same time, Theorem 2 guarantees the convergence of the learning process on the target domain.
Theorem 1. The training process in MTEM can achieve the stated generalization bound, where C is an independent constant.
Theorem 2. With the training process in MTEM, the instance adaptive Tsallis entropy is guaranteed to converge on the unlabeled data.
We use a hypothesis h : X → ∆^{K−1} to analyze the effectiveness of MTEM in achieving domain adaptation. Formally, we measure the model's robustness to perturbations on a dataset D. We let R̂(H|_D) denote the Rademacher complexity [Gnecco et al., 2008] of the function class H (h ∈ H) on dataset D. Rademacher complexity evaluates the ability of the worst hypothesis h ∈ H to fit random labels. If there exists an h ∈ H that fits most random labels on D, then R̂(H|_D) is large. With the above definitions, we deduce Theorem 3.
Theorem 3. Suppose the conditions of [et al., 2021] hold for some constants q, c ∈ (0, 1). With probability at least 1 − δ over the drawing of D_T^u from D_T, the error rate of the model h_θ (h ∈ H) on the target domain, ϵ_{D_T}(h_θ), is bounded as in Eq. (17).
With Theorem 3, we demonstrate that:
1. Theorem 1 and Theorem 2 prove that MTEM can simultaneously optimize ψ and θ to minimize L_S(h|D_S) + L_T(h, ψ|D_T^u), i.e., the first two terms in Eq. (17).
2. With the bi-level optimization process, the learning process on D_T^u is regularized by the supervised loss on the source domain. As D_S does not overlap with D_T^u, fitting the random labels on D_T^u does not lead to a decrease of the supervised loss on the source domain (i.e., L_S(θ|D_S)). Thus, h ∈ H fits less noise on D_T^u, reducing R̂(H × H|_{D_T^u}). At the same time, as D_S is unseen in the training process on the target domain, it is also hard to fit random labels on D_S, thereby reducing R̂(H|_{D_S}).
3. The instance adaptive Tsallis entropy is an unsupervised loss. As accessing unlabeled data is easier than accessing labeled data, MTEM provides the possibility of sampling a larger unlabeled dataset to make ζ smaller.
4. The remaining robustness term is a term that can be minimized in the training process technically, e.g., via adversarial training [Jiang et al., 2020] or the SAM (Sharpness-Aware Minimization) optimizer [Foret et al., 2020].
Experiment Settings
Datasets. On the rumor detection task, we conduct experiments with the dataset TWITTER [Zubiaga et al., 2016], which contains five domains: "Cha.", "Ger.", "Fer.", "Ott.", and "Syd.". On the sentiment classification task, we conduct experiments with the dataset Amazon [Blitzer et al., 2007], which contains four domains: books, dvd, electronics, and kitchen. Preprocessing details and statistics for the TWITTER dataset and the Amazon dataset can be found in Appendix D. Although CST also involves Tsallis entropy, its entropy index is a manually set hyper-parameter and is not instance-adaptive. WIND is a meta-reweighting based domain adaptation approach that learns-to-learn suitable instance weights for the labeled samples in the source domain. More details about the baseline methods can be found in the references. Implementation Details. The base model on the Amazon dataset is BERT [Devlin et al., 2019] and the base model on the TWITTER dataset is BiGCN [Bian et al., 2020]. Domain adaptation experiments are conducted on every domain of the benchmark datasets. For every domain, we separately take it as the target domain and merge the remaining domains as the source domain. For example, when the "books" domain in the Amazon dataset is taken as the target domain, the "dvd", "electronics" and "kitchen" domains are merged as the source domain. All unlabeled data from the target domain are involved in the training process, while the labeled data in the target domain are used for evaluation (with a ratio of 7:3). Since the TWITTER dataset does not contain extra unlabeled data, we take 70% of the labeled data in the target domain as the unlabeled data for training the model and preserve the rest for evaluation. The experiments on TWITTER are conducted on "Cha.", "Fer.", "Ott.", and "Syd.". For the symbols in Algorithm 1, we set η_t and β_t with respect to Assumption 2.
General Results
We use all baseline approaches (including MTEM) to adapt BiGCN across domains on TWITTER, and to adapt BERT across domains on Amazon. We validate the effectiveness of the proposed MTEM on both unsupervised and semisupervised domain adaptation scenarios. For semi-supervised domain adaptation scenario, 100 labeled instances in the target domain are randomly selected as the in-domain dataset. As the rumor detection task mainly concerns the classification performance in the 'rumor' category, we use the F1 score to evaluate the performance on TWITTER. On the sentiment classification task, different classes are equally important. Thus, we use the accuracy score to evaluate different models, which is also convenient for comparison with previous studies. Experiment results are listed in Table 1, Table 2.
The results in Table 1 and Table 2 demonstrate the effectiveness of the proposed MTEM algorithm. In particular, MTEM outperforms all baseline approaches on all benchmark datasets. Compared with the self-training approaches, i.e., FixMatch and CST, MTEM maintains a superiority of nearly 2 percent on average on the Amazon dataset and of 4 percent on average on the TWITTER dataset. Thus, regularizing the self-training process with an instance-adaptive Tsallis entropy is beneficial. Moreover, MTEM also surpasses the meta-reweighting algorithm, i.e., WIND, by an average of nearly 2 percent on the Amazon dataset and nearly 18 percent on the TWITTER dataset. Thus, the meta-learning algorithm in MTEM, i.e., learning to learn the suitable entropy indexes, is a competitive candidate in the domain adaptation scenario.
Ablation Study
We separately remove the meta-learning module (-w/o M), the temperature scheduler (-w/o T), and the sampling mechanism (-w/o S) to observe the adaptation performance across domains on the benchmark datasets. -w/o M means all instances in the target domain are allocated the same entropy index (determined by manual search). -w/o T means removing the temperature scheduler, so the temperature κ is fixed to 1.0. -w/o S means removing the sampling mechanism, i.e., pseudo labels are generated with the greedy strategy used in previous self-training approaches. The experiments are conducted under the unsupervised domain adaptation scenario. We validate the effectiveness with the F1 score on TWITTER, and use the accuracy score on Amazon. The experiment results are listed in Tab. 3 and Tab. 4. From Tab. 3 and Tab. 4, we can find that all variants perform worse than MTEM on both benchmark datasets: (i) MTEM surpasses MTEM -w/o M on the Amazon dataset by an average of 2 percent, and on the TWITTER dataset by an average of 7 percent. Thus, allocating an instance-adaptive entropy index to every unlabeled instance in the target domain is superior to allocating the same entropy index. Furthermore, since the unified entropy index in MTEM -w/o M is searched manually, MTEM -w/o M should be at least as good as Gibbs entropy; otherwise, the entropy index would simply have been set to 1.0 (Gibbs entropy). Thus, the instance adaptive Tsallis entropy in MTEM is better than Gibbs entropy. (ii) MTEM surpasses MTEM -w/o S on the Amazon dataset by an average of 1.4 percent, and on the TWITTER dataset by an average of 1.5 percent, which is attributed to the fact that the sampling mechanism can directly correct the model's prediction errors. (iii) MTEM surpasses MTEM -w/o T by an average of 0.9 percent on the TWITTER dataset, and by an average of 0.7 percent on the Amazon dataset, which is consistent with our claim that the annealing mechanism is beneficial for aligning the domains gradually.
Computation Cost
We conduct experiments on the Amazon dataset to compare the computation cost of the Taylor approximation and the original Second-order derivation. We separately count the time and the memory consumed in computing the gradient of the entropy indexes with different batch sizes. Experiments are deployed on an Nvidia Tesla V100 GPU. From Fig. 3, the time cost of the Second-order derivation is almost two times higher than that of the Taylor approximation, and its memory cost is 3-4 times higher than that of the Taylor approximation technique. We also compare the performance of adapting the BERT model to the 'kitchen' domain with respect to different batch sizes. The experiment results are listed in Tab. 5, where '/' means the memory cost exceeds the device's capacity. From Tab. 5, we can observe that the Taylor approximation technique keeps a performance similar to the Second-order derivation. What's more, the best performance is achieved by using a batch with more than 50 instances (the setting in § 5.2), which would exceed the memory capacity if we used the Second-order derivation. Thus, the benefit of reducing the computation cost is apparent, as a larger batch size leads to better adaptation performance.
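For intuition about what is being traded off, the toy example below (a single scalar parameter θ and entropy index ψ) compares the exact hypergradient of an outer loss through one inner update, which contains the mixed second-order derivative of the inner loss, with a cheap finite-difference surrogate that only needs extra forward passes. It illustrates the cost gap, but is not a reproduction of the Taylor approximation used in MTEM; all functions and values are made up.

```python
beta = 0.1  # inner-loop learning rate (illustrative value)

def inner_loss(theta, psi):      # stands in for L_T(theta, psi | D_T^u)
    return psi * theta ** 2

def outer_loss(theta):           # stands in for L_S(theta | D_S)
    return (theta - 1.0) ** 2

def inner_step(theta, psi):      # one gradient step on the inner loss
    grad_theta = 2.0 * psi * theta
    return theta - beta * grad_theta

def exact_hypergradient(theta, psi):
    # d L_S(theta') / d psi via the chain rule; the d(theta')/d(psi) factor
    # is where the mixed second-order derivative of the inner loss appears.
    theta_new = inner_step(theta, psi)
    d_theta_new_d_psi = -beta * 2.0 * theta        # = -beta * d^2 L_T / (d theta d psi)
    d_outer_d_theta = 2.0 * (theta_new - 1.0)
    return d_theta_new_d_psi * d_outer_d_theta

def finite_diff_hypergradient(theta, psi, eps=1e-3):
    # first-order surrogate: two extra forward evaluations, no second-order terms
    hi = outer_loss(inner_step(theta, psi + eps))
    lo = outer_loss(inner_step(theta, psi - eps))
    return (hi - lo) / (2.0 * eps)

print(exact_hypergradient(0.5, 2.0))        # 0.14
print(finite_diff_hypergradient(0.5, 2.0))  # ~0.14
```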
Case Study
In Tab. 6, we present two cases with different entropy indexes learned in the meta-learning process (more cases are provided in Appendix E). Experiments are conducted on the sentiment classification task, and the settings are the same as for 'kitchen' in § 5.2. In the sentences with a smaller entropy index (updating the model aggressively), the sentiment words are more transferable across e-commerce domains, e.g., 'cheaper' and 'free shipping'. In contrast, sentences with a larger entropy index contain more domain-discriminative words, e.g., the n-gram 'but too small for soup or stew' in the kitchen domain is less relevant to the other domains (electronics, books, dvd). In this case, MTEM uses a large entropy index to update the model more cautiously.
Conclusion
This paper proposes a new meta-learning algorithm for domain adaptation on text classification, namely MTEM. Inheriting the principle of entropy minimization, MTEM imposes an instance adaptative Tsallis entropy minimization process on the target domain, and such a process is formulated as a meta-learning process. To reduce the computation cost, we propose a Taylor approximation technique to compute the gradient of the entropy indexes. Also, we propose an annealing sampling mechanism to generate pseudo labels. In addition, we analyze the proposed MTEM theoretically, i.e., we prove the convergence of the meta-learning algorithm in optimizing the instance-adaptative entropy and provide insights for understanding why MTEM is effective in achieving domain adaptation. Extensive experiments on two popular models, BiGCN and BERT, verify the effectiveness of MTEM.
A Proofs
Assumption 1. For a model with parameters θ (i.e., f(x; θ)), we assume that: (i) the model's prediction probability on every dimension is larger than 0; (ii) the model's gradient back-propagated from its prediction is bounded by a finite constant; (iii) all operations involved in the model are Lipschitz smooth.
Assumption 1 is easy to satisfy. Conditions (i) and (ii) can be met technically, e.g., by clipping the values that are too small or too large in the model's predictions and gradients. Condition (iii) requires that all operations involved in the model be smooth. To our knowledge, existing pretrained language models satisfy such requirements.
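As a minimal sketch of how conditions (i) and (ii) can be enforced in practice, the NumPy snippet below clips prediction probabilities away from 0 and 1 and bounds the norm of the back-propagated gradient; the eps and max_norm values are arbitrary placeholders, not settings used in our experiments.

```python
import numpy as np

def clip_probs(p, eps=1e-6):
    """Keep every predicted probability strictly positive and bounded
    away from 1, then renormalize (condition (i))."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
    return p / p.sum(axis=-1, keepdims=True)

def clip_grad_norm(grad, max_norm=10.0):
    """Bound the gradient back-propagated from the prediction (condition (ii))."""
    grad = np.asarray(grad, dtype=float)
    norm = np.linalg.norm(grad)
    return grad if norm <= max_norm else grad * (max_norm / norm)
```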
Assumption 2. The learning rate η t (line 10 of Algorithm 1) satisfies η t = min{1, k1 t } for some k 1 > 0, where k1 t < 1. In addition, The learning rate β t (line 8 of Algorithm 1) is a monotone descent sequence and β t = min{ 1 L , k2 A.1 Proof for Theorem 1 . Then, we can derive the gradient of o on g(o) as below: Proof. According to the definition of ℓ ψ [i] in Eq. (3), Since We obtain the gradient of θ with respect to L T (θ, ψ|D u T ) (abbreviated as L T ) as below: Then, we derive the gradient of ψ [i] with respect to ▽ ψ [i] L T and obtain Eq. (28). When we further derive the gradient of ψ [i] with respect to ∂ 2 L T ∂θ∂ψ [i] , we can obtain Eq. (29).
Asỹ i is a one-hot vector, we only need to analyze the nonzero element inỹ i ⊙ log 2 (f (x i ; θ)) ⊙ f ψ [i] −2 (x i ; θ). Let the index of the non-zero element be j, then Eq. (32) holds.
According to Lemma A, Eq. (32) is bounded, || is a bounded term. Similar analysis can be conducted on Eq. (30), and the result is that || is also bounded. Since entropy index is instance adaptive, ψ [i] is independent to ψ [j̸ =i] . Thus, || ∂ 2 L T ∂θ∂ψ || and || ∂ 3 L T ∂θ∂ψ 2 || are bounded Lemma 1. Suppose the operations in the base model is Lipschitz smooth, then ℓ ψ [i] (f (x i , θ),ỹ i ) is Lipschitz smooth with respect to θ for ∀ψ [i] > 1 and ∀x i ∈ D S D u T , i.e., there exists a finite constant ρ 1 and a finite constant L 1 that satisfy: , i.e., there exists a finite constant ρ 2 and a finite constant L 2 that satisfy: Proof. With the definition of Tsallis loss in Eq.
(3), we have: where z is the index of the non-zero element in one-hotỹ i . According to Assumption 1, f [z] (x i ; θ)) is constrained in (x i ; θ)) is thus bounded. Therefore, there exists a finit constant ρ 1 such that || Since || is also bounded, which means that there exists a finite constant L 1 that satisfies Since which implies that there exists a finit constant ρ 2 that satisfies With similar efforts, we write ∇ 2 As illustrated above, | 1 With a similar analysis in Eq. (46), we conclude: Thus, there exists a finit constant L 2 that satisfies Lemma C. The entropy indexes ψ is Lipschitz continuous with constant ρ v , and Lipschitz smooth with constant L v to the loss L S (θ(ψ)|D S ). Formally, Proof. The gradients of ψ with respect to meta loss are written as: According to Lemma B, ∂ 2 L T ∂θ∂ψ [i] and ∂ 3 L T ∂θ∂ψ 2 i are bounded.
Here, we let ∂ 2 L T ∂ψ∂ψ [i] be bounded by ϱ and ∂ 2 L T ∂ψ∂ 2 ψ [i] be bounded by B. Thus, we have the following inequality: As ρ 1 and ϱ are finite constants, we know that there exists a finit constant ρ v that satisfies || ∂L S (θ(ψ)|D S ) ∂ψ || 2 ≤ ρ v . Further, we observe that: Thus, we have: As L 1 , ρ 1 , B and ϱ are finite constants, there exists a finit constant L v that satisfies || ∂ 2 L S (θ(ψ)|D S ) Theorem 1. The training process in MTEM can achieve where C is an independent constant.
Proof. The update of ψ in each iteration is as follows: In our implementation, we sample validation batch V from D S and replace Eq. (64) with Eq. (65), as shown in Algorithm 1.
In the following proof, we abbreviate L S (θ t (ψ t )|D S ) as L S (θ t (ψ t )), and abbreviate L S (θ t (ψ t )|V) asL S (θ t (ψ t )). Since the validation batch V is uniformly from the entire data set D S , we rewrite the update as: where ξ t = ∇ ψLS (θ t (ψ t ))−∇ ψ L S (θ t (ψ t )) are i.i.d random variable with finite variance σ S . Furthermore, E[ξ t ] = 0, as V are drawn uniformly at random. Observe that By Lipschitz smoothness of θ to L S (θ|D S ), we have Eq. (70) is obtained according to line 8 in Algorithm 1, Eq. (71) is due to Lemma 1. Due to the Lipschitz continuity of L S (θ t (ψ)) (mentioned in Lemma 1), we can obtain the following: Thus, Eq. (67) satisfies: Rearranging the terms, we can obtain: Summing up the above inequalities and rearranging the terms, we can obtain: Thus, by taking expectations with respect to ξ t on both sides, we obtain: The third inequlity holds for T ) in T steps, and this finishes our proof of Theorem 1.
A.2 Proof for Theorem 2 Lemma D. (Lemma 2 in [Shu et al., 2019]) Let (a n ) n ≤ 1 , (b n ) n ≤ 1 be two non-negative real sequences such that the series ∞ t=1 a n diverges, the series ∞ t=1 a n b n converges, and there exists ν > 0 such that |b n+1 − b n | ≤ νa n . Then the sequences (b n ) n ≤ 1 converges to 0. Theorem 2. With the training process in MTEM, the instance adaptive Tsallis entropy is guaranteed to be converged on unlabeled data. Formally, Proof. With the assumption for the learning rate η t and β t , we can conclude that η t satisfies ∞ t=0 η t = ∞ and ∞ t=0 η 2 t < ∞, β t satisfies ∞ t=0 β t = ∞ and ∞ t=0 β 2 t < ∞. We abbreviate L T (θ, ψ|D u T ) as L T (θ, ψ), L T (θ, ψ|B) asL T (θ, ψ), where B is a training batch sampled uniformly from D u T , as shown in Algorithm 1. Then, each update step is written below: where For the first term, For the second term, Therefore, we have: Taking expectation of both sides of Eq. (99) and since E[ξ t ] = 0,E[Υ t ] = 0, we have: Summing up the above inequalities over t = 1, ..., ∞ in both sides, we obtain: With Lemma D and ∞ t=0 η t = ∞, it is easy to deduce that lim t→∞ E[∇ θ L T (θ t ; ψ t+1 )∥ 2 2 ] = 0 holds when: for some constant ν. Due to the Cauchy inequality: We then have: Observe that: Thus, we have: According to Lemma D, we can achieve:
A.3 Proof for Theorem 3
We use hypothesis h : X → ∆ K−1 to analyze the effectiveness of MTEM in achieving domain adaptation. For- ) denote the empirical Rademacher complexity [Gnecco et al., 2008] of function class H (h ∈ H) on dataset D, where σ i are independent random noise drawn from the Rademacher distribution i.e. P r(σ i = +1) = P r(σ i = −1) = 1/2. Then, we deduce Theorem 3. Definition 1. (q, c)-constant expansion Let P rob(D) denote the distribution of the dataset D, P rob i (D) denote the conditional distribution given label i . For some constant q, c ∈ (0, 1), if for any set D ∈ D S ∪ D T and ∀i ∈ [K] with 1 2 > P rob i (D) > q, we have P rob i (N (D)/ D) > min{c, P rob i (D)} Theorem 3. Suppose D S and D u T satisfy (q, c)− constant expansion [Wei et al., 2021] for some constant q, c ∈ (0, 1). With the probability at least 1 − δ over the drawing of D u T from D T , the error rates of the model h θ (h ∈ H) on the target domain (i.e., ϵ D T (h θ )) is bounded by: where z = arg max k {y} andz = arg max k {ỹ} are the index of the non-zero element in one-hot label vectors y andỹ. Considering that : ≤ = 2 × ℓ 2 (f (x; θ), y) it is natural to conclude that: Also, we conclude that: Thus, B More Details about Tsallis Entropy
B.1 Variants of Tsallis Entropy and Tsallis Loss
By adjusting the entropy index, we can obtain different types of entropy. For example, letting the entropy index approach 1 yields the Gibbs entropy, and α = 2 recovers the Gini impurity.
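These special cases can be checked numerically with the standard Tsallis form S_α(p) = (1 − Σ_k p_k^α)/(α − 1), which may differ from the paper's definition by constant factors; the probability vector below is an arbitrary example.

```python
import numpy as np

def tsallis_entropy(p, alpha):
    """Standard Tsallis entropy: (1 - sum_k p_k^alpha) / (alpha - 1)."""
    p = np.asarray(p, dtype=float)
    return (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

def gibbs_entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

p = [0.7, 0.2, 0.1]
print(tsallis_entropy(p, 1.0001))   # ~0.80, approaches the Gibbs entropy as alpha -> 1
print(gibbs_entropy(p))             # ~0.80
print(tsallis_entropy(p, 2.0))      # 1 - sum(p^2) = 0.46, the Gini impurity
```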
Also, adjusting the entropy index generates different Tsallis losses. In particular, letting the entropy index approach 1.0 recovers the cross-entropy, as shown below:
B.2 Deduction of Eq. (9)
We denote f(x_i; θ) as f_i, whose k-th element is denoted as f_i[k]. Also, we denote ỹ_i as a one-hot vector whose non-zero element is at index k. Thus, we have:
C Implementation Details
For the symbols in Algorithm 1, we set η_t and β_t with respect to Assumption 2. We set η_t in Algorithm 1 to 5e-5 for the BERT model and 5e-3 for the BiGCN model. In addition, ψ is initialized to 2.0, and the learning rate used to update the entropy indexes, i.e., β_t in Algorithm 1, is initialized to 0.1 for both the BERT and the BiGCN models. We conduct all experiments on a GeForce RTX 3090 GPU with 24 GB of memory.
D Statistics of the Datasets
The TWITTER dataset is provided at its release site under a CC-BY license. The Amazon dataset is accessed from https://github.com/ruidan/DAS. The statistics of the TWITTER dataset and the Amazon dataset are listed in Table 8 and Table 9.
E Cases with Different Entropy Indexes
Cases are drawn from the Amazon dataset. In each sentence, we highlight the sentiment words in red and the label in blue. | 2023-08-08T06:43:03.008Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "d4b807bf9b72e4216b1580c286cc18f641987bf2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d4b807bf9b72e4216b1580c286cc18f641987bf2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
4545824 | pes2o/s2orc | v3-fos-license | Inhibition of intracellular lipolysis promotes human cancer cell adaptation to hypoxia
Tumor tissues are chronically exposed to hypoxia owing to aberrant vascularity. Lipid droplet (LD) accumulation is a hallmark of hypoxic cancer cells, yet how LDs form and function during hypoxia remains poorly understood. Herein, we report that in various cancer cells upon oxygen deprivation, HIF-1 activation down-modulates LD catabolism mediated by adipose triglyceride lipase (ATGL), the key enzyme for intracellular lipolysis. Proteomics and functional analyses identified hypoxia-inducible gene 2 (HIG2), a HIF-1 target, as a new inhibitor of ATGL. Knockout of HIG2 enhanced LD breakdown and fatty acid (FA) oxidation, leading to increased ROS production and apoptosis in hypoxic cancer cells as well as impaired growth of tumor xenografts. All of these effects were reversed by co-ablation of ATGL. Thus, by inhibiting ATGL, HIG2 acts downstream of HIF-1 to sequester FAs in LDs away from the mitochondrial pathways for oxidation and ROS generation, thereby sustaining cancer cell survival in hypoxia.
Introduction
Lipid droplets (LDs) are key organelles responsible for storing cellular surplus of fatty acids (FAs) in esterified forms such as triglycerides (TGs) and sterol esters (Wilfling et al., 2014). While TG-LDs likely form through de novo synthesis of TGs as a lens within the ER bilayer, TG catabolism/lipolysis is catalyzed by LD-localized adipose triglyceride lipase (ATGL) (Zechner et al., 2012). ATGL is the rate-limiting intracellular TG hydrolase in various cell and tissue types, and its activity in vivo is modulated by coactivator Comparative Gene Identification 58 (CGI-58) and inhibitor G0/G1 Switch Gene 2 (G0S2) (Lass et al., 2006;Yang et al., 2010). The enzymatic action of ATGL channels hydrolyzed FAs to mitochondria for b-oxidation as well as to the synthesis of lipid ligands for PPARa, whose activation in turn leads to enhanced mitochondrial biogenesis and function. In normal oxidative cell types such as hepatocytes, brown adipocytes and cardiomyocytes, loss of ATGL is known to cause accumulation of TG-LDs and impairment of mitochondrial oxidative capacity (Ahmadian et al., 2011;Haemmerle et al., 2011;Ong et al., 2011).
Recently, emerging evidence also points to a novel tumor suppressive role for ATGL. For example, whole-body ablation of ATGL in mice induces spontaneous pulmonary neoplasia (Al-Zoughbi et al., 2016). In addition, adipose-specific knockout of ATGL together with HSL causes liposarcoma , and intestine-specific disruption of the ATGL co-activator CGI-58 promotes colorectal tumorigenesis (Ou et al., 2014). Collectively, these new findings implicate the possibility that inhibition of ATGL-mediated lipolysis may facilitate cancer development. However, the pathophysiological context and the molecular pathways that regulate ATGL in cancer are still unknown.
During the growth of solid tumors, hypoxic regions often arise when the rate of tumor cell proliferation exceeds that of angiogenesis (Jain, 1988). Hypoxia is a potent microenvironmental factor promoting aggressive malignancy, and is known for its association with poor survival in a variety of tumor types (Hanahan and Weinberg, 2011;Masson and Ratcliffe, 2014;Rankin and Giaccia, 2016). In response to oxygen deprivation, hypoxia inducible factors (HIFs) mediate multiple protective mechanisms, which together act to maintain oxygen homeostasis through reducing oxidative metabolism and oxygen consumption (Gordan and Simon, 2007;Papandreou et al., 2006;Rankin and Giaccia, 2008;Semenza, 2010).
To date, the best-characterized metabolic adaptation is the HIF-1-mediated switch from glucose oxidation to glycolysis for energy production (Masson and Ratcliffe, 2014;Nakazawa et al., 2016). In comparison, little is known regarding the regulation of intracellular fatty acid (FA) availability and oxidation in hypoxic cancer cells. Since enhanced fatty acid oxidation (FAO) and hypoxia both promote mitochondrial generation of reactive oxygen species (ROS) (Bleier and Drö se, 2013;Guzy et al., 2005;Schö nfeld and Wojtczak, 2008), one conceivable mechanism for hypoxic cells to prevent oxidative stress is to channel free FAs to TG-LDs for storage. Indeed, increased accumulation of TG-LDs is now being recognized as a hallmark of hypoxic cancer cells of various origins (Koizume and Miyagi, 2016). Evidence derived from studies of cancer, glial and neural stem cells further implies that the capacity to accumulate LDs is positively linked to the ability of cells to survive oxidative stress in hypoxia (Bailey et al., 2015;Bensaad et al., 2014;Liu et al., 2015). However, establishing a definitive relationship would require a better understanding of the molecular mechanisms that govern hypoxia-induced LD formation.
In the present study, we have obtained compelling evidence to show that hypoxia causes profound inhibition of ATGL-mediated lipolysis in cancer cells. By using an unbiased proteomics screen and functional analyses, we identified a small protein encoded by Hypoxia-Inducible Gene 2 (HIG2) as a novel endogenous inhibitor of ATGL and as being solely responsible for mediating lipolytic inhibition in hypoxia. Our results further demonstrate that through inhibiting ATGL and mitochondrial FA oxidation, HIG2 acts downstream of HIF-1 to promote LD accumulation, attenuate ROS production, and enhance cancer cell survival in hypoxia.
Intracellular lipolysis is reduced in hypoxic cancer cells
To determine whether lipolytic changes contribute to LD accumulation in hypoxia, we measured free FA release as an index of intracellular lipolysis in various human colorectal cancer (CRC) and renal cell carcinoma (RCC) cell lines under different oxygenated conditions. In ACHN (RCC), Caki-1 (RCC), DLD-1 (CRC) and HCT116 (CRC) cells, a 24 hr hypoxic treatment (0.5% O 2 ) resulted in release of significantly lower levels of FA (2.5-3-fold) relative to the normoxic condition (20% O 2 ) ( Figure 1A). In response to hypoxia, the reduction in FA release was accompanied by an increased TG accumulation (2-3-fold) ( Figure 1B). Genetic disruption of ATGL using CRISPR/Cas9 method, though profoundly decreased FA release in normoxic cells, failed to further reduce FA efflux in hypoxic HCT-116 cells ( Figure 1C). In addition, hypoxia elicited no considerable changes in the expression of ATGL ( Figure 1D and G) or its co-activator CGI-58 (Lass et al., 2006) (Figure 1E and G) and inhibitor G0S2 (Yang et al., 2010) ( Figure 1F). These results indicate that ATGL-mediated lipolysis is suppressed in hypoxia via a mechanism independent of expressional regulation.
HIG2 is identified as an ATGL interacting protein
To search for potential protein regulator(s) of ATGL, we expressed FLAG-ATGL in HCT116 cells and performed anti-FLAG immunoprecipitation after hypoxic treatment. Following resolution of coimmunoprecipitated proteins by SDS-PAGE (Figure 2A), Mass Spectrometry analysis identified one potential ATGL-binding partner as HIG2 ( Figure 2B and C), a 63-amino acid (~7 kDa) protein After 24 hr of incubation under normoxia or hypoxia, lipolysis (A) and cellular TG content (B) were measured. n = 4 biologically independent experiments. **p<0.01, ***p<0.001 vs. Normoxia. (C) Lipolysis in HCT116 clone cells after 24 hr of incubation under normoxia or hypoxia. ATGL knockout (KO) cells were generated by CRSPR/Cas9 method. n = 3 biologically independent experiments. ***p<0.001 vs. Normoxia WT. (D-F) mRNA levels of genes related to lipolysis in cells after 24 hr of incubation under normoxia or hypoxia. n = 4 biologically independent experiments. **p<0.01 vs. Normoxia. (G) Protein levels of ATGL and CGI58 in cells after 24 hr of incubation under normoxia or hypoxia were determined by immunoblotting with anti-ATGL and anti-CGI58 antibodies, respectively. Graphs represent mean ±SD, and were compared by two-tailed unpaired Student t-test. DOI: https://doi.org/10.7554/eLife.31132.002 were detected by immunoblotting with anti-HIG2 and anti-FLAG antibodies, respectively. (F) HeLa cells were co-transfected with Myc-ATGL vector (Nterminal Myc tag) along with vector alone, HIG2-FLAG WT or HIG2-FLAG D7-11 vector (C-terminal FLAG tag). Immunoprecipitation was performed by anti-FLAG gels. HIG2 and ATGL proteins were detected by anti-FLAG and anti-Myc antibodies, respectively. (G) HeLa cells were co-transfected with HIG2-FLAG vector along with vector alone, Myc-ATGL or mutant vectors. ATGLDPT and ATGLDHD are two internal deletion mutants that lack the patatin domain (residues 10-178) and the hydrophobic domain (residues 259-337), respectively. Immunoprecipitation was performed by anti-Myc gels. HIG2 and ATGL proteins were detected by anti-FLAG and anti-Myc antibodies, respectively. DOI: https://doi.org/10.7554/eLife.31132.003 encoded by the Hypoxia Inducible Lipid Droplet Associated (Hilpda) gene (Gimm et al., 2010). Interestingly, successive sequence alignment revealed that HIG2 contains a hydrophobic domain (HD) highly similar to the ATGL inhibitory domain of G0S2 (Cerk et al., 2014;Yang et al., 2010) ( Figure 2D), raising the possibility of HIG2 being a novel ATGL inhibitor. Immunoblotting analysis provided the first piece of evidence that verified coimmunoprecipitation of ATGL with endogenous HIG2 ( Figure 2E). The interaction between ATGL and HIG2 was further confirmed in HeLa cells (a human UCC cell line), when the two proteins were coexpressed ( Figure 2F). Deletion of LY(V/L)LG (D7-11), a motif conserved between the HDs of HIG2 and G0S2, completely eliminated the ability of HIG2 to bind ATGL ( Figure 2F). Moreover, ATGLDPT and ATGLDHD are two internal deletion mutants that lack the catalytic patatin-like domain (residues 10-178) and the LD-localizing hydrophobic domain (residues 259-337), respectively. As shown in Figure 2G, HIG2 co-immunoprecipitated with wild type ATGL and ATGLDHD, while ATGLDPT mutant exhibited no interaction with HIG2 ( Figure 2G). Therefore, like G0S2 (Yang et al., 2010), HIG2 binds to the patatin-like catalytic domain of ATGL.
HIG2 inhibits the TG hydrolase activity of ATGL
To determine if HIG2 regulates ATGL enzymatic activity, cell extracts of HeLa cells overexpressing ATGL were used as a source of ATGL in a TG hydrolase activity assay. As shown in Figure 3A and B, addition of in vitro translated HIG2 protein inhibited the activity of human and mouse ATGL by 80% and 85%, respectively. Similar extent of inhibition was observed when we included in the reaction either recombinant HIG2 (His-MBP-HIG2) purified from E. coli ( Figure 3C and D) or HIG2-containing HeLa cell extracts ( Figure 3E). HIG2 appears to be selective for ATGL, as it was unable to affect the TG hydrolase activity of hormone-sensitive lipase (HSL) ( Figure 3F).
Immunofluorescence microscopy revealed that intracellular LD degradation mediated by ATGL was also effectively blocked by HIG2. As revealed by staining with BODIPY 493/503, a nonpolar probe selective for neural lipids such as TG, HeLa cells transfected with Myc-ATGL alone exhibited a marked reduction in both size and number of LDs upon oleic acid loading when compared with the adjacent untransfected cells ( Figure 3G). However, co-expression of HIG2-FLAG was able to reverse this effect of Myc-ATGL. Consequently, HIG2-FLAG and Myc-ATGL were found to be co-localized at the surface of LDs. Furthermore, HIG2D7-11, a mutant deficient in interacting with ATGL, was incapable of preventing ATGL-induced LD degradation ( Figure 3G), indicating the requirement of a direct interaction for ATGL inhibition.
Lipolytic inhibition by HIG2 contributes to TG-LD accumulation and cancer cell survival under hypoxia
Early investigations found that overexpression of HIG2 promotes LD accumulation in various cell types (DiStefano et al., 2015;Gimm et al., 2010). Recently, a study using a conditional knockout mouse model showed that HIG2 mediates neutral lipid accumulation in macrophages and contributes to atherosclerosis in apolipoprotein E-deficient background (Maier et al., 2017). To determine whether inhibition of ATGL constitutes a major underlying mechanism, we used CRISPR/Cas9 method to disrupt HIG2 in HCT116, DLD-1, ACHN and HeLa cells. In cells from the control clones of all four cell lines, hypoxia dramatically induced HIG2 expression along with TG deposition (Figure 4-figure supplement 1A-F). While deletion of HIG2 mildly decreased the low levels of TG in normoxia, the ability to accumulate TG in hypoxia was uniformly lost in cells from the HIG2 knockout (KO) clones generated by using two independent gRNA (Figure 4-figure supplement 1C-F). Strikingly, co-disruption of ATGL was able to restore hypoxia-induced TG and LD accumulation in HIG2 KO cells ( Figure 4A-C). Similar effect was achieved in the HIG2-deficient HCT116 cells by the overexpression of wild type HIG2 but not HIG2D7-11, the mutant disabled in ATGL interaction and inhibition ( Figure 4D). Additionally, deletion of HIG2 alone in hypoxic cells led to a 7-fold increase in the lipolytic rate, which was completely reversed upon co-deletion of ATGL ( Figure 4E). Taken together, these results provide proof that HIG2 acts to enhance intracellular TG levels through inhibiting ATGL-catalyzed lipolysis.
One of the most notable changes elicited by HIG2 deficiency was the decreased number of viable cells when hypoxia was prolonged. While the growth of the wild type cells was generally decreased in hypoxia, disruption of HIG2 or/and ATGL incurred no further changes within a 24 hr period of Figure 5A and B). However, when hypoxia was extended to 48 hr, loss of HIG2 caused a significant reduction in the number of viable cells ( Figure 5B) as well as a marked increase in apoptotic cell death, as evidenced by the robust appearance of cleaved PARP and Caspase-3 proteins ( Figure 5C; Figure 5-figure supplement 1A-E). Fluorescence-activated cell sorting (FACS) analysis further demonstrated that HIG2 disruption increased the rate of apoptosis from 3.73% to 14.6% after extended hypoxia ( Figure 5D; Figure 5-figure supplement 1F). In HIG2 KO cells, deletion of ATGL increased the number of viable cells comparable to that of wild type cells ( Figure 5B). Loss of ATGL also completely rescinded the cleavage of PARP and Caspase-3 ( Figure 5C; Figure 5-figure supplement 1E) as well as substantially reduced the number of apoptotic cells ( Figure 5D; Figure 5-figure supplement 1F). In addition, apoptosis induced by HIG2 deficiency could be recued by the overexpression of wild type HIG2 but not HIG2D7-11 as revealed by immunoblotting ( Figure 5E) and FACS analysis ( Figure 5F). Therefore, through inhibiting ATGL, HIG2 plays an essential role in protecting against apoptosis and thus sustaining cell survival during extended hypoxia.
Lipolytic inhibition promotes hypoxic cell survival through reducing PPARa activity, FAO and ROS production ATGL is known to be a key regulator of PPARa activation and mitochondrial FA oxidation (FAO) in normal oxidative cell types (Zechner et al., 2012). In normoxic HCT116 cells that express low levels of HIG2 protein, deletion of ATGL or/and HIG2 caused no significant differences in the mRNA levels of Ppara and its target genes for FAO including Cpt1a, Cpt1b, Vlcad, Acaa2 and Mcad ( Figure 6A) or the rates of FAO as measured by the rate of the production of radiolabeled H 2 O from radiolabeled oleic acid ( Figure 6B). In response to hypoxia, the wild type and ATGL KO cells displayed a pronounced decrease in both the rates of FAO and the expression of PPARa and its target genes ( Figure 6A and B). By contrast, hypoxic HIG2 KO cells largely maintained the expression of FAO genes and levels of FAO. These effects were consistent regardless of whether radiolabeled oleic acid was added to the cells during hypoxia or intracellular TG was pre-labeled in normoxia prior to the cells being exposed to hypoxia (Figure 6-figure supplement 3A). Co-deletion of ATGL was able to rescue these effects of HIG2 deficiency ( Figure 6A and B), arguing that HIG2-mediated ATGL inhibition, instead of the decreased oxygen supply, is primarily responsible for the diminished FAO in hypoxia. Interestingly, loss of HIG2 does not appear to affect glycolytic phenotypes as hypoxia induced similar increases of glucose consumption and lactate production in wild type and HIG2 KO cells ( Figure 6-figure supplement 1A-D). Thus, the inhibition of FA mobilization by HIG2 does not appear to impact glycolytic phenotypes in hypoxic cancer cells.
Enhanced FAO and hypoxia both promote mitochondrial generation of ROS (Bleier and Drö se, 2013; Guzy et al., 2005;Schö nfeld and Wojtczak, 2008). We speculated that in hypoxic HIG2 KO cells, increased FAO and low oxygen conditions would synergistically cause excessive ROS production. In support of this hypothesis, HIG2 KO cells exhibited a near 250% increase of intracellular ROS levels in hypoxia as compared to normoxia ( Figure 6C; Figure 6-figure supplement 2). This is in contrast to the wild type, ATGL KO and HIG2/ATGL double knockout (dKO) cells, all of which only experienced a~120% increase in ROS production in hypoxia ( Figure 6C; Figure 6-figure supplement 2). Most importantly, treatment of hypoxic HIG2 KO cells with anti-oxidant N-acetyl cysteine (NAC) dose-dependently inhibited cleavage of PARP and Caspase-3 ( Figure 6D) and cell labeling by Annexin V ( Figure 6E). Interestingly, the PPARa antagonist, GW6471, also inhibited both ROS production and apoptosis ( Figure 6F and G). Treatment of cells with Ranolazine or Trimetazidine, two different pharmaceutical inhibitors of FAO, similarly blocked apoptosis in HIG2 KO cells ( Figure 6H- Figure 3 continued activity was determined. n = 4 biologically independent experiments. n = 2 biologically independent experiments. *p<0.05 vs. mATGL +His MBP. (E, F) Lysate from HeLa cells transfected with HIG2 vectors was added to extracts of HeLa cells expressing mATGL (E) or mHSL (F), and TG hydrolase activity was measured. n = 4 biologically independent experiments. ***p<0.001 vs. mATGL +Vector. (G) HeLa cells transfected with Myc-ATGL vector in the absence or presence of HIG2-FLAG WT or HIG2-FLAG D7-11 vector were incubated with 200 mM of oleate complexed to BSA overnight followed by immunofluorescence staining. Lipid droplets were stained by BODIPY 493/503 (green). Graphs represent mean ±SD, and were compared by two-tailed unpaired Student t-test. Effects of HIF-1 on lipid metabolism and cell survival are dependent on lipolytic inhibition Next, we determined whether HIF-1 acts upstream of HIG2 in initiating the pathway that leads to the inhibition of ATGL-mediated lipolysis and FAO. In line with HIG2 as a target gene of HIF-1, knockdown of HIF-1a using a specific siRNA oligo caused a substantial decrease in HIG2 expression induced by hypoxia (
Lipolytic inhibition is critical for tumor growth in vivo.
To determine the in vivo role of lipolytic inhibition mediated by HIG2, we injected wild type, ATGL KO, HIG2 KO, and HIG2/ATGL dKO HCT116 cells subcutaneously into nude mice to generate tumor xenografts. Deletion of HIG2 resulted in a profound delay in tumor growth as compared to the wild type control group ( Figure 7A). In particular, we observed that tumors in the wild type group reached volumes of~1100 mm 3 (>600 mg in weight) after only 25 days, whereas tumor volumes in the HIG2 KO group were only~180 mm3 (<100 mg in weight) ( Figure 7B and C). Histological analysis of tissue sections revealed a substantially reduced accumulation of neutral lipids, increased cleavage of Caspase-3 and increased staining of the lipid peroxidation marker 4-HNE in the HIG2 KO tumors (
The lipolytic pathway is downregulated in human cancers
To determine if the lipolytic pathway is affected in human tumors, we analyzed solid tumor data sets in the TCGA (The Cancer Genome Atlas). Consistent with our RCC and CRC cell lines, we found that HIG2 mRNA abundance is strongly associated with various solid tumors including kidney renal clear cell carcinoma ( protein in vivo, we compared levels of HIG2 in 19 RCC samples matched by Fuhrman grade (grade 2) and by adjacent uninvolved kidney tissue. HIG2 protein was highly expressed in RCC tissues but hardly detectable in the matched adjacent normal kidney tissue, coinciding with the HIF-1a protein levels and tissue TG content ( Figure 8B; Figure 8-figure supplement 2). By contrast, no significant differences were detected in the protein expression of ATGL or CGI-58 ( Figure 8B). These observations indicate that the lipolytic inhibition mediated by the upregulation of HIG2 is a relevant mechanism for cancer pathophysiology in humans.
Discussion
Understanding how cancer cells become adapted to hypoxia is central to understanding how hypoxia promotes tumor progression and malignancy. One compelling idea is that metabolic adaptations driven by HIF-1 confer a selective advantage for cancer cells in the low oxygen environment. It had been recognized previously that through enhanced HIF-1 activity, cancer cells increase their TG-LD accumulation in response to oxygen deprivation (Bensaad et al., 2014;Koizume and Miyagi, 2016). Additionally, HIF-1 activation downregulates mitochondrial oxidative capacity (Masson and Ratcliffe, 2014;Zhang et al., 2007), by which it helps to reduce oxygen consumption and maintain oxygen homeostasis in hypoxia. However, the earlier studies did not precisely define how HIF-1 functions to facilitate these two metabolic alterations. In this regard, the present study has uncovered a major unifying mechanism by demonstrating that inhibition of ATGL-mediated lipolysis by HIG2 contributes to LD storage and attenuated mitochondrial FA oxidation under hypoxia. The first hint of the HIG2 protein function came from the sequence alignment that revealed a homology between the HIG2 HD and the ATGL inhibitory domain of G0S2. Biochemical and cell biological analyses subsequently confirmed that like G0S2, HIG2 specifically inhibits the TG hydrolase activity of ATGL. Decreased TG hydrolysis results from a specific interaction, since deletion of the LY (V/L)LG motif conserved between the HDs of HIG2 and G0S2 abolished both ATGL interaction and inhibition. Recently, Cerk et al. demonstrated that a peptide derived from the G0S2 HD containing the LY(V/L)LG motif is capable of inhibiting ATGL in a dose dependent, non-competitive manner (Cerk et al., 2014). It is highly conceivable that HIG2 inhibits ATGL via a similar biochemical mechanism. In addition, HIG2-ablated hepatocytes were previously shown to exhibit increased TG turnover under normoxic conditions (DiStefano et al., 2015). However, knockout mouse studies conducted by the same group and Dijk et al. later yielded results arguing against a direct involvement of HIG2 in lipolysis (Dijk et al., 2017;DiStefano et al., 2016). The reason for these discrepancies is currently unknown. We speculate that the lack of hypoxia in the employed experimental settings, under which endogenous HIG2 might be expressed at low levels and thus possess a relatively insubstantial role, may contribute to the absence of significant changes caused by HIG2 ablation.
Knockout of HIG2 increased lipolysis and decreased TG levels in hypoxic cancer cells. Co-ablation of ATGL was able to rescue these phenotypes of HIG2 deficiency, suggesting that the specific inhibition of ATGL is functionally important for the adaptive LD accumulation downstream of HIG2 expression. Moreover, early studies have shown that ATGL-mediated TG hydrolysis provides necessary lipid ligands for PPARa activation. Associated with a markedly diminished PPARa target gene expression, ATGL-deficient cells often exhibit severely disrupted mitochondrial oxidation of FAs (Ahmadian et al., 2011;Haemmerle et al., 2011;Ong et al., 2011). In agreement with these previous findings, our data demonstrate that in cancer cells, hypoxia-induced downregulation of FAO and reduction in expression of PPARa target genes both are dependent on HIG2. This dependency was completely lost upon ATGL ablation, again suggesting a prerequisite role for ATGL inhibition by HIG2. Inhibition of FAO and antagonism of PPARa both led to reduced ROS production and apoptotic induction in hypoxic HIG2 KO cells, indicating the activation of PPARa-dependent FAO as a major contributor to oxidative stress. PPARa is generally thought to limit lipotoxicity through upregulating FAO during excess FA availability. While sustained activation of FAO often results in increased generation of ROS in mitochondria, PPARa is known to promote the expression of various anti-oxidases such as catalase and superoxide dismutase (SOD) (Khoo et al., 2013;Liu et al., 2012). We speculate that in normoxia, mechanisms balancing ROS generation and degradation likely operate to maintain the steady-state redox environment. However, during hypoxia when oxygen is insufficiently supplied, a tilt toward excessive ROS production can occur as a result of increased electron leakage from the mitochondrial electron transport chain (ETC), leading to oxidative stress. A question arises as to why lipolytic inhibition in hypoxic cancer cells would be advantageous. Cancer cells require large amounts of lipids for the synthesis of cellular membranes to maintain high cell proliferation rates. However, lipotoxicity can occur at times when FA delivery to the cells exceeds FA oxidation rates. This may be especially important during the periods of hypoxia, when HIF-1 activation is known to induce FA uptake (Bensaad et al., 2014). On the other hand, the oxidation of FAs may need to be decelerated in hypoxia as it consumes significant amounts of oxygen, which can exacerbate oxygen insufficiency. More importantly, both hypoxia and excessive FA oxidation cause elevated mitochondrial ROS production (Bleier and Drö se, 2013; Guzy et al., 2005;Schö nfeld and Wojtczak, 2008). Elevated ROS levels lead to peroxidation of membrane lipids, denaturation of proteins and deactivation of enzymes, which together can lead to cell damage and apoptosis. Therefore, switch-off of FA oxidation combined with storage of excess FAs in TG-LDs through inhibition of lipolysis would constitute a conceivable strategy for cancer cells to prevent ROS overproduction and oxidative damage as well as evade lipotoxicity in hypoxia. In this regard, the present study provides compelling evidence that the HIF-1-HIG2 antilipolytic pathway is a central component of such a survival strategy. In hypoxia, restoration of lipolysis by ablation of either HIG2 or HIF-1a caused significant increases in FAO and ROS generation along with decreased cell viability. 
A causal relationship between ROS elevation and increased cell death is demonstrated by the fact that exogenously applied antioxidant NAC protected HIG2 KO cells from hypoxia-induced apoptosis. Furthermore, we found that overexpression of wild type HIG2 but not HIG2D7-11, the mutant deficient in ATGL inhibition, decreased ROS production and increased resistance to hypoxia-induced apoptosis. These results clearly establish that inhibition of ATGL-mediated lipolysis by HIG2 is downstream of and required for HIF-1 to elicit the protective effects in hypoxic cancer cells. Our results complement the existing model illustrating that HIF-1 represses glucose flux to the TCA cycle through mediating expression of PDK1, which inhibits pyruvate oxidation through phosphorylating and inactivating PDH (Kim et al., 2006). In cancer cells located within the poorly oxygenated regions of solid tumors, coordinate inhibition of ATGL and PDH by HIG2 and PDK1, respectively, should collectively lead to reduced mitochondrial oxidative metabolism and ROS production as well as improved tissue oxygen homeostasis ( Figure 8C). In addition, we observed that neither hypoxia-induced glucose consumption nor glycolysis was affected by HIG2 deletion. Although excessive mitochondrial FAO may impair glucose utilization via the classic Randle cycle, our results suggest that the acquisition of glycolytic phenotypes by hypoxic cancer cells is independent of the inhibition of FA mobilization by HIG2. We speculate that the main purpose of storing excessive FAs in TG-LDs in hypoxia likely is for prevention of the potential cytotoxicity that is associated with FAO-driven ROS generation.
Based on the data derived from the present study, we propose that HIG2 is a novel metabolic oncogenic factor, which exerts its function by neutralizing the tumor suppressive role of ATGL/CGI-58. Our observations that ATGL inhibition by HIG2 promotes hypoxic cancer cell survival in vitro and tumor growth in vivo are supportive of this hypothesis. By suggesting a critical role of lipolytic inhibition in hypoxic tumor areas, the present study provides justification for the development of specific chemical disruptors of HIG2-ATGL interaction. Such drugs presumably would be able to liberate ATGL and potentiate FAO-driven ROS production to toxic levels, resulting in apoptotic death of hypoxic cancer cells. Our findings that HIG2 is highly upregulated in multiple human solid tumors are in agreement of this concept. The HIF-1-HIG2 antilipolytic pathway represents a departure from the typical metabolic pathways that have been targeted therapeutically to deprive cells of necessary fuel/building blocks. Lastly, it is important to note that while the bioinformatics analysis revealed an upregulated expression of HIG2 in a variety of solid tumors, our mechanistic studies mainly employed HeLa, CRC and RCC cell lines. It is conceivable that the impact of HIG2 as a lipolytic inhibitor varies among different tumor types. In this regard, increased expression of LD coat protein Perilipin 2 downstream of HIF-2 was recently shown to promote lipid storage in clear cell RCC (Qiu et al., 2015). Together, the accumulating data have painted a complex and intricate picture in which cancer cells regulate their lipid accumulation in hypoxia.
Cell culture
HeLa cells were cultured in DMEM (Invitrogen) containing 10% heat-inactivated FBS (Invitrogen). HCT116, DLD-1 and Caki-1 cells were cultured in McCoy's 5A medium (Invitrogen) containing 10% FBS. ACHN cells were cultured in EMEM (ATCC) with 10% FBS. All media were also supplemented with 100 U ml−1 penicillin/streptomycin (Invitrogen). Normoxic cells (20% O2) were maintained at 37˚C in a 5% CO2/95% air incubator. For hypoxic exposure, cell culture plates were placed in a hypoxia incubator (Eppendorf, USA) at 0.5% O2. All cell lines were obtained from the American Type Culture Collection (ATCC). None of the cell lines used was found in the database of commonly misidentified cell lines that is maintained by ICLAC and NCBI Biosample. All cell lines were authenticated by STR profiling and tested to show no mycoplasma contamination.
PCR cloning of cDNA and site-directed mutagenesis
The complete open reading frame of human HIG2, human or mouse ATGL was cloned into pRK vector without any tags, pKF vector with a FLAG epitope tag, or pKM vector with a Myc epitope tag as described before (Yang et al., 2010). Deletion mutations were generated by using the QuickChange site-directed mutagenesis kit (Agilent) according to manufacturer's guidelines.
In vitro transcription/translation expression
In vitro transcription/translation was carried out by using the TNT SP6 High-Yield Protein Expression System (Promega) according to the manufacturer's instructions. Specifically, reactions consisting of 30 μl TNT SP6 High-Yield Wheat Germ Master Mix and 5 μg vector DNA, made up to 50 μl with molecular biology grade water, were incubated for 120 min at 25˚C. Then the reaction mixture was used for the TG lipase activity assay.
Production and purification of bacterially expressed proteins
The human HIG2 or HIG2 D7-11 was subcloned by standard PCR into the pET His6 MBP vector (Addgene, #29708), producing a fusion protein with a His6-MBP tag at the N-terminal end. The MBP tag promotes the solubility of the target protein, and the His tag allows the fusion protein to be easily purified with Ni-NTA agarose beads. Plasmids were transformed into BL21 (DE3) E. coli (Agilent) and grown in LB media while shaking at 37˚C to an OD600 value of 0.6. Protein expression was then induced by adding 1 mM IPTG (Sigma) to the LB cultures, which were shaken at 27˚C for another 4 hr. The cells were lysed by sonication/collagenase, and the purification was performed using Ni-NTA agarose beads (Qiagen) according to the commercial protocol. The purified protein was then dialyzed overnight in a buffer containing 10 mM Tris-HCl, pH 7.4, 150 mM NaCl and 1 mM EDTA, aliquoted and stored at −80˚C ready for use.
CRISPR/Cas9-mediated gene deletion
pSpCas9(BB)-2A-Puro (pX459) V2.0 was a gift from Feng Zhang (Addgene plasmid #62988). Insert oligonucleotides that include a gRNA sequence were designed using http://crispr.mit.edu/ as follows: for HIG2 deletion, guide1-GGGTCAGTACCACACCTAAC and guide2-GTGTTGAACCTCTACCTGTT; for ATGL deletion, guide-GACCCCGGTGACCAGCGCCG. For co-deletion of HIG2 and ATGL, HIG2 guide 1 and the ATGL guide were used. Cells were seeded in 6-well plates and the following day transfected with pX459 plasmids containing DNA specific to HIG2 and/or ATGL using Lipofectamine 2000. Cells were selected under puromycin (1.5 μg/ml) for 48 hr and plated onto 96-well plates. Screening for genetic modifications was performed by immunoblotting analysis. Mutations were confirmed by direct sequencing. HIG2-deficient clones derived from guide1 were used for experiments unless otherwise indicated.
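As a small illustrative check (not part of the experimental pipeline), the guide sequences listed above can be verified for length and GC content with a few lines of Python:

```python
guides = {
    "HIG2_guide1": "GGGTCAGTACCACACCTAAC",
    "HIG2_guide2": "GTGTTGAACCTCTACCTGTT",
    "ATGL_guide":  "GACCCCGGTGACCAGCGCCG",
}

for name, seq in guides.items():
    gc_fraction = (seq.count("G") + seq.count("C")) / len(seq)
    print(f"{name}: length={len(seq)} nt, GC={gc_fraction:.0%}")
```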
Immunoprecipitation analysis
Cells were lysed in cell lysis buffer containing 50 mM Tris-HCl, pH 7.4, 150 mM NaCl, 1% Triton X-100, 1 mM DTT, and protease tablet inhibitors (1 tablet per 10 ml of buffer). Anti-Myc or anti-Flag M2-conjugated agarose gels were incubated with the lysates for 4 hr at 4˚C. The beads were then washed four times with lysis buffer, and the bound proteins were eluted in SDS buffer and analyzed by immunoblotting or mass spectrometry.
Immunoblotting analysis
Cells were lysed at 4˚C in a buffer containing 50 mM Tris-HCl (pH 7.4), 150 mM NaCl, 10 mM NaF, 1% Nonidet P-40, 0.1% SDS, 0.5% sodium deoxycholate, 1.0 mM EDTA, 10% glycerol, and protease tablet inhibitors (1 tablet per 10 ml of buffer). The lysates were clarified by centrifugation at 20,000 Â g, 4˚C for 10 min and then mixed with equal volume of 2 Â SDS sample buffer. Equivalent amounts of protein were resolved by SDSPAGE and transferred to nitrocellulose membranes. Individual proteins were blotted with primary antibodies at appropriate dilutions. Peroxide-conjugated secondary antibodies were incubated with the membrane at a dilution of 1:5000. The signals were then visualized using ECL substrate (Thermo Scientific).
Proteomic analysis
The immunoprecipitated samples were resolved by 10-20% SDS-PAGE gels and visualized by Coomassie Blue staining. Then, the gel portions were excised, de-stained, dehydrated, dried, and subjected to trypsin digestion. The resulting peptides were subjected to liquid chromatography (LC)-ESI-MS/MS analysis performed on a Thermo Scientific LTQ Orbitrap mass spectrometer at the Mayo Clinic Proteomics Core.
Immunofluorescence microscopy
Cells were seeded on glass coverslips placed in 6-well plates and transfected with 0.25 mg of each DNA construct using Lipofectamine 2000 according to the manufacturer's instructions. Six hours later, transfected cells were incubated with 200 mM oleic acid/0.2% BSA overnight. Following the fixation with 3% paraformaldehyde in PBS for 20 min, cells were permeabilized by 0.5% triton X-100 for 5 min, quenched with 100 mM glycine in PBS for 20 min, and then blocked with 1% BSA in PBS for 1 hr. The cells were then exposed to primary antibody for 2 hr at room temperature. Following three washes with PBS, the cells were treated for 1 hr with Alexa Fluor secondary antibodies. To visualize lipid droplets, 1 mg/ml of Bodipy 493/503 dye was added during the incubation with secondary antibodies. Samples were mounted on glass slides with Vecta shield mounting medium and examined under a Zeiss LSM 510 inverted confocal microscope. Acquired images were processed and quantified manually with ImageJ FIJI software.
Assay for TG hydrolase activity
HeLa cells were transfected with ATGL-expressing plasmids using Lipofectamine 2000 overnight and lysed on ice by sonication in a lysis buffer (0.25 M sucrose, 1 mM EDTA, 1 mM Tris-HCl pH 7.4, 1 mM DTT, 20 mg/mL leupeptin, 2 mg/mL antipain and 1 mg/mL pepstatine). The cell extract was clarified by centrifugation at 15,000 g for 10 min, and the supernatant was used as the enzyme source for the assay of TG hydrolase activity. The TG lipase activity was determined using a lipid emulsion labeled with [9,10-3 H]-triolein as substrate. For HIG2 obtained from HeLa cells transfected with HIG2-expressing plasmids, 50 ml of HIG2 lysate was combined with 30 ml of ATGL lysate; For HIG2 derived from In vitro transcription/translation expression, 25 ml of In vitro transcription/translation reaction was combined with 25 ml of lysate buffer and 30 ml of ATGL lysate; For HIG2 purified from E. coli, 1 mg of protein diluted in 50 ml of lysate buffer and was combined with 30 ml of ATGL lysate. Then the 80 ml HIG2/ATGL mixture was incubated with 80 ml of substrate solution for 60 min at 37˚C. Reactions were terminated by adding 2.6 ml of methanol/chloroform/heptane (10:9:7) and 0.84 mL of 0.1 M potassium carbonate, 0.1 M boric acid (pH 10.5). Following centrifugation at 800 Â g for 15 min, radiolabeled fatty acids in 1 ml of upper phase were measured by liquid scintillation counting.
Lipolysis assay and measurement of cellular TG content
Lipolysis was measured as the rate of free fatty acid release. In brief, cells were cultured under normoxia or hypoxia with 10% FBS medium in the presence of 200 mM oleate/0.2% BSA complex. Twenty four hours later, cells were washed twice with PBS and incubated with serum-free medium without phenol red containing 1% BSA for another four hours. Then medium was collected and FAs released were determined by a FA assay kit according to the manufacturer's instructions. Lysates were then prepared from the remaining cells, and protein concentrations in the lysates were used to normalize FFA levels in the medium. The relative rate of lipolysis (%) was calculated based on the relative concentration of FFAs among the groups. For TG measurement, cells or human samples were lysed in lysis buffer (1% NP-40, 150 mM NaCl, 10 mM Tris-HCl, pH7.4). Equal volume of cell lysates were used to measure TGs using a triglyceride assay kit according to the manufacturer's instructions. TG concentration was calculated and normalized to protein contents.
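As a worked illustration of the normalization described above, the snippet below divides the medium FFA readout by the protein content of the corresponding lysate and expresses the result relative to the normoxic control; all numbers are invented for demonstration only.

```python
# FA assay kit readout for the medium and protein content of the lysates (hypothetical values)
ffa_medium_nmol = {"normoxia": 42.0, "hypoxia": 15.5}
protein_mg      = {"normoxia": 0.80, "hypoxia": 0.78}

normalized = {k: ffa_medium_nmol[k] / protein_mg[k] for k in ffa_medium_nmol}
relative_lipolysis_pct = {k: 100.0 * v / normalized["normoxia"] for k, v in normalized.items()}
print(relative_lipolysis_pct)   # e.g. {'normoxia': 100.0, 'hypoxia': ~38}
```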
HIF-1a knockdown by siRNA
Cells were seeded at 20-40% confluency and the day after transfected with 25 nM siRNA using Lipofectamine RNAiMAX according to the manufacturer's protocol. One day later, cells were incubated under normoxia or hypoxia with fresh McCoy's 5A Medium containing 10% FBS and processed for designated assays. The following stealth siRNA oligonucleotides (Invitrogen) were used for human HIF1a knockdown: 5'-GGGAUUAACUCAGUUUGAACUAACU-3 0 (sense) and 5 0 -AGUUAG UUCAAACUGAGUUAAUCCC-3'. Control oligonucleotides with comparable GC content were also purchased from Invitrogen.
Fatty acid oxidation measurement
Fatty acid oxidation was assessed on the basis of 3 H 2 O production from [9,10-3 H]-oleate as previously described with minor modifications unless otherwise stated. Cells in 6well plates were washed with PBS, and then incubated with 2 ml of BSA-complexed oleate (0.2 mM unlabeled plus 2 mCi/ml of [9, 10-3 H] oleate and 0.2% BSA) in serum-free medium. Six hours later, the medium was collected for measuring 3 H 2 O production. Briefly, excess [9,10-3 H]-oleate in the medium was removed by precipitating twice with an equal volume of 10% trichloroacetic acid with 2% BSA. After centrifugation at 15,000 Â g for 3 min at 4˚C, the supernatants (0.5 ml) were extracted with 2.5 ml of chloroform/methanol (2:1, v/v) and 1 ml of 2 M KCl/HCl (1:1, v/v), following by centrifuged at 3,000 Â g for 5 min. The aqueous phase containing 3 H 2 O was collected and subjected to liquid scintillation counting and data was normalized by protein contents.
ROS measurement and cell apoptosis assay by flow cytometry
For ROS measurements, cells were washed with PBS and stained with 2.5 mM H 2 DCFDA in PBS at 37˚C for 30 min. Then cells were trypsinized, washed with PBS and re-suspended in PBS. Stained cells were filtered and DCF fluorescence was measured using a FACSCelesta flow cytometer (BD Biosciences) and FACSDiva software. For apoptosis detections, cells were harvested and washed with PBS. Pellets were resuspended in 1X Annexin binding Buffer, and stained with Alexa Fluor 488 annexin V and PI for 15 min at room temperature. Stained cells were filtered and analyzed immediately by flow cytometer. Apoptosis data are presented as the percent fluorescence intensity of gated cells positive for Annexin V.
Quantification of viable cells
2.5 × 10⁵ cells were plated in 12-well plates 1 day before exposure to hypoxic conditions (0.5% O₂). At the times indicated, cells were trypsinized and viable cells were counted using the trypan blue dye exclusion test.
Lactate production and glucose consumption assay
Cells were cultured under normoxia or hypoxia for 24 hr. Glucose and lactate levels in the culture medium were determined using a glucose assay kit and a lactate assay kit, respectively, according to the manufacturer's instructions. Data were normalized to protein content.
Immunohistochemistry
Xenograft tumors were fixed in 4% paraformaldehyde and embedded in paraffin. Sections were deparaffinized by baking slides at 55°C for 15 min and clearing in xylene, rehydrated through a series of ethanol solutions, and endogenous peroxidase activity was quenched with 3% H₂O₂ for 10 min. Antigen retrieval was performed by pepsin digestion for 10 min and sections were blocked for 30 min in 2.5% normal horse serum. Sections were then incubated with anti-cleaved Caspase-3 or anti-4-HNE antibody (1:200 dilution) overnight at 4°C. Sections were further incubated for 40 min with ImmPRESS-AP Anti-Rabbit IgG Reagent (Vector Laboratories), and then processed with Permanent Red Substrate-Chromogen (Agilent Technologies) and Methyl Green solution (Agilent Technologies) for immunohistochemical staining. Sections were photographed at ×20 magnification, and staining intensity was measured using ImageJ FIJI software.
Oil red O staining
Frozen sections from xenograft tumors were fixed in 10% formalin, washed with propylene glycol, and stained with 0.5% oil red O in propylene glycol for 10 min at 60°C. The slides were then differentiated in 85% propylene glycol, washed with water, and counterstained with Mayer's hematoxylin. The red lipid droplets were visualized by microscopy.
Mouse xenografts
Male athymic nude mice, 6-7 weeks of age, were purchased from Taconic Biosciences. The experimental procedures were approved by the Mayo Clinic Institutional Animal Care and Use Committee. Animals arriving in our facility were randomly placed in cages with five mice each. They were implanted with the respective tumor cells by cage, with cages selected at random. Cells were prepared in phenol red-free culture medium without FBS. Two hundred microliters containing 3 × 10⁶ HCT116 clone cells or 2 × 10⁶ HeLa clone cells were injected subcutaneously into both flanks of nude mice. Tumor size was measured twice weekly by caliper in two dimensions and the tumor volume in mm³ was calculated by the formula (width)² × length / 2. After 25 days, mice were killed, and tumors were dissected and weighed.
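The caliper-based volume estimate can be written as a one-line function; this is a minimal sketch of the stated formula with example measurements chosen by us.

```python
def tumor_volume_mm3(width_mm: float, length_mm: float) -> float:
    """Approximate tumor volume from two caliper measurements: width^2 * length / 2."""
    return width_mm ** 2 * length_mm / 2.0

# Hypothetical measurement: 6 mm wide, 9 mm long -> 162 mm^3
print(tumor_volume_mm3(6.0, 9.0))
```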
TCGA data analysis
TCGA RNA-seq expression data were downloaded from the UCSC (University of California, Santa Cruz) cancer browser (https://genome-cancer.ucsc.edu/), in which the gene expression values were measured by log2-transformed, RSEM-normalized read counts (Cline et al., 2013; Li and Dewey, 2011). Differential gene expression analyses between tumor and matched normal tissue for HIG2 were performed in 20 cancer types that have at least two available normal tissues. P values were evaluated using the Wilcoxon rank-sum test (i.e. Mann-Whitney U test).
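A minimal sketch of this tumor-versus-normal comparison using SciPy's Mann-Whitney U test is shown below; the expression values are hypothetical and stand in for the log2-transformed, RSEM-normalized TCGA values.

```python
from scipy.stats import mannwhitneyu

# Hypothetical log2-scale HIG2 expression values for one cancer type.
tumor_log2 = [9.8, 10.4, 11.2, 9.1, 10.9, 10.2]
normal_log2 = [7.2, 6.9, 7.8, 7.5]

# Two-sided Wilcoxon rank-sum (Mann-Whitney U) test, as used for the TCGA comparison.
stat, p_value = mannwhitneyu(tumor_log2, normal_log2, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4g}")
```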
Human tissue biospecimens
Frozen biospecimens were collected as previously described (Ho et al., 2015). The biobank protocol was approved by the Mayo Clinic Institutional Review Board (protocol no. 08-000980) and patient informed consent was obtained from all subjects. After review by a genitourinary pathologist, frozen matched tumors and adjacent uninvolved kidney were selected for further study. Criteria for selection included tumor samples composed of viable-appearing tumor cells with ≥60% tumor nuclei and ≤20% necrosis of sample volume.
Statistics and reproducibility
Sample sizes and statistical tests for each experiment are denoted in the figure legends or in the figures. Data analyses were performed using Excel software (2013) and values are expressed as mean ± SD or SEM. The unpaired two-tailed Student's t-test was used to determine the statistical significance of differences between means (p<0.05, p<0.01, p<0.001) unless otherwise indicated. All experiments were repeated independently at least three times with similar results, except for mass spectrometry, patient samples and animal experiments shown in Figure 2a,b, Figure 7a,b,c,d, Figure 8b and supplementary Figure 5a,b. There is no estimate of variation in each group of data and the variance is similar between the groups. No statistical method was used to predetermine sample size. The investigators were not blinded to allocation during experiments and outcome assessment. All data were expected to have normal distribution. None of the samples/animals was excluded from the experiment. | 2018-04-03T05:17:51.344Z | 2017-12-19T00:00:00.000 | {
"year": 2017,
"sha1": "b73b30380b0ced7fed92878a9c0ebfb8d058482a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.31132",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e0effa728f711514b76a8fc3070a754b0f25baec",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
232404245 | pes2o/s2orc | v3-fos-license | CalibDNN: Multimodal Sensor Calibration for Perception Using Deep Neural Networks
Current perception systems often carry multimodal imagers and sensors such as 2D cameras and 3D LiDAR sensors. To fuse and utilize the data for downstream perception tasks, robust and accurate calibration of the multimodal sensor data is essential. We propose a novel deep learning-driven technique (CalibDNN) for accurate calibration among multimodal sensors, specifically LiDAR-camera pairs. The key innovation of the proposed work is that it does not require any specific calibration targets or hardware assistants, and the entire processing is fully automatic with a single model and single iteration. Comparison against different methods and extensive experiments on different datasets demonstrate state-of-the-art performance.
INTRODUCTION
The ability to simultaneously leverage multimodal sensor information is critical for many intelligent perception tasks, including autonomous driving, robot navigation, and sensor-driven situational awareness. Modern perception systems often carry multimodal imagers and sensors such as electro-optical/infrared (EO/IR) cameras and LiDAR sensors, with the expectation of additional modalities in the future. Robust and accurate estimation of their extrinsic (and intrinsic) parameters is essential to fuse and utilize the multimodal data for downstream perception tasks. While a number of techniques have been proposed, specifically for the LiDAR-camera registration problem, most of the existing methods 1-4 are target-based and require specific environments with targets, complex setups and significant amounts of manual effort. In addition, dynamic, online calibration of the deviations caused by sensor vibrations and environmental changes is difficult with existing methods. Target-less methods have been proposed, 5-9 but they still require accurate initialization, parameter fine-tuning or precise motion estimates and significant amounts of data.
In this work, we develop a new deep learning-driven technique for accurate calibration of a LiDAR-camera pair, which is completely data-driven, does not require any specific calibration targets or hardware assistants, and whose entire processing is end-to-end and fully automatic. Recent applications of deep learning to sensor calibration have shown promising results. [10][11][12][13][14][15][16][17] We leverage these recent achievements and propose a deep learning-based technique, CalibDNN. We model the calibration as a parameter regression problem and utilize an advanced deep neural network to accurately align the LiDAR point cloud to the image and regress 6DoF extrinsic calibration parameters. Geometric supervisions, including a depth map loss and a point cloud loss, and a transformation supervision are employed to guide the learning process and maximize the consistency of input images and point clouds. Given LiDAR-camera pairs, the system automatically learns meaningful features, infers cross-modal correlations, and estimates the 6DoF rigid body transformation between the 3D LiDAR and 2D camera in real-time.
Our main contributions are: (1) Proposing a novel network architecture for LiDAR-camera calibration. The system is simple, with a single model and single iteration, and can also be extended to multiple iterations to deal with larger miscalibration ranges.
(2) Defining powerful geometric and transformation supervisions to guide the learning process to maximize the consistency of multimodal data.
(3) Further pushing deep learning-based calibration toward real-world application by applying it to a challenging dataset with complex and diverse scenes.
RELATED WORKS
Different methods have been proposed to solve the problem of multimodal sensor calibration. Traditional methods can be categorized into target-based and target-less techniques.
For target-based techniques, Zhang and Pless 1 proposed an extrinsic calibration approach between a camera and a laser range finder. They made a checkerboard visible to both the camera and the laser range finder in a scene to obtain many constraints on the calibration parameters under different poses of the checkerboard, and then solved for the parameters by minimizing an algebraic error and a re-projection error. Using several checkerboards as targets, Geiger et al. 2 presented an automatic extrinsic calibration method for camera-camera and range sensor-camera pairs. They needed a single shot for each sensor and a particular calibration setup. Using a novel 3D marker, Velas et al. 3 proposed a coarse-to-fine approach for pose and orientation estimation. Guindel and Beltrán et al. 4 employed a target with four symmetrical circular holes and then utilized two stages of segmentation and registration.
One of the first target-less techniques was proposed by Scaramuzza et al. 5 They manually selected corresponding points in the overlapping view between the 3D LRF and camera and then used the PnP algorithm followed by a nonlinear refinement. With no need for human labeling, Levinson and Thrun 6 proposed an online target-less calibration method. They optimized the alignment between depth discontinuities in the laser data and image edges by a grid search whenever a sudden miscalibration was detected by a probabilistic monitoring algorithm. Pandey et al. 7 proposed a mutual information (MI) based target-less calibration algorithm. They estimated parameters by maximizing MI using Barzilai-Borwein steepest gradient ascent. Building on motion-based techniques, Ishikawa and Oishi et al. 8 extended the hand-eye calibration framework to camera-LiDAR calibration. They first initialized parameters from the camera motion and LiDAR motion, and then iteratively refined the camera motion and extrinsic parameters by sensor-fusion odometry until convergence. Park et al. 9 further improved the motion-based method by introducing a structureless stage where 3D features were computed by triangulation. Although parameter initialization and sensor overlap were not required in these methods, the performance still depended on motion estimates and a large amount of data.
With the recent rapid development of deep learning, its application to computer vision has shown tremendous success. Its application to extrinsic calibration is a newer topic that has also made great progress. Kendall et al. 10 proposed PoseNet, which used a convolutional neural network to regress camera location and orientation. Schneider et al. 11 presented RegNet, in which three parts, feature extraction, feature matching, and global regression, were used to regress extrinsic parameters, based on Network in Network. 12 Iyer et al. 13 proposed CalibNet using ResNet and the same three-part idea as RegNet. Instead of directly regressing on extrinsic parameters, they utilized a geometrically supervised method with a photometric loss and a point cloud loss. Building on the success of RegNet and CalibNet, Shi et al. 14 and Lv et al. 15 proposed calibration methods that add a cost volume and a recurrent convolutional neural network. SOIC, proposed by Wang et al., 16 employed semantic centroids to convert the initialization problem into a PnP problem. They also used a cost function based on the correspondence of semantic elements between the image and point cloud. Yuan et al. 17 proposed RGGNet, which utilizes a deep generative model and Riemannian geometry for online calibration.
METHODOLOGY
We aim to design an end-to-end model for multimodal sensor calibration, which paves the way for downstream scene understanding tasks such as semantic segmentation. With point clouds from a LiDAR and RGB images from a camera as input pairs, the calibration output is the 6-DoF extrinsic parameters, which define the orientation and translation between the LiDAR and camera sensors. In this section, we explain the details of the method, including the system overview, data preprocessing, network architecture, loss functions, and the extension to an iterative refinement method. Figure 1 shows the system overview of the proposed CalibDNN. The output is the 6-DoF parameters, comprising a 3-DoF rotation vector and a 3-DoF translation vector. We take the point cloud obtained from a LiDAR and the corresponding RGB image obtained from a camera as the input. We do not know the extrinsic parameters between the LiDAR and camera in the real world, but we do need pre-calibrated data samples to generate ground truth data for training the network model. Therefore, when projecting the point cloud onto the image plane to form the so-called depth map, we add a random transformation to the extrinsic parameters to obtain a miscalibrated depth map used for model training. We then feed the miscalibrated depth map and the corresponding RGB image into the calibration network to predict the rotation and translation vectors, which are used to compute the transformation loss. We also apply both the predicted transformation matrix (converted from the predicted vectors) and the ground truth transformation to the input depth map to compute the depth map loss. Finally, we obtain the ground truth and predicted point clouds through back-projection from the corresponding depth maps to calculate the point cloud loss.
Data preprocessing
As discussed in the last subsection, the inputs are pairs of point clouds and RGB images. We need to convert the 3-D point cloud to a 2-D depth map so that each pixel in the depth map represents the distance from the camera to the corresponding point in the real world. Given intrinsic parameters P and extrinsic parameters T, the projection is defined as y = P · T · x, where x represents a 3-D point in the point cloud and y represents the corresponding 2-D point in the converted depth map.
Our method requires calibrated image-point cloud (or image-depth map) pairs for training the network. To obtain the uncalibrated depth map, we intentionally miscalibrate the calibrated depth map by applying a random transformation T_rand and use it as the input depth map. The extrinsic parameters after adding the random transform are therefore T_rand · T, and the input depth map, projected from the point cloud, is defined as y_input = P · T_rand · T · x. Given the miscalibrated depth map as input, the ground truth depth map is defined as y_gt = P · T · x. Therefore, the target is to regress on the ground truth transformation T_gt = T_rand⁻¹. Since the converted depth map is often too sparse for feature extraction, we apply a max-pooling interpolation to the sparse depth map to produce a semi-sparse depth map.
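To make the preprocessing concrete, the NumPy sketch below projects LiDAR points into a sparse depth map and indicates how a miscalibrated training input would be generated; the 3×3 intrinsic matrix K, the synthetic points, and the helper names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def project_to_depth_map(points_xyz, K, T, height, width):
    """Project Nx3 LiDAR points into a sparse depth map via the pinhole model y ~ K [T x]."""
    x_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])   # Nx4 homogeneous points
    cam = (T @ x_h.T).T[:, :3]                                     # points in the camera frame
    in_front = cam[:, 2] > 0                                       # keep points in front of the camera
    uvw = (K @ cam[in_front].T).T                                  # homogeneous pixel coordinates
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    z = cam[in_front, 2]
    depth = np.zeros((height, width), dtype=np.float32)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth[v[ok], u[ok]] = z[ok]                                    # later points simply overwrite earlier ones
    return depth

# Training pairs: the miscalibrated input uses T_rand @ T, the ground truth uses T,
# and the regression target is T_rand^{-1}.
rng = np.random.default_rng(0)
pts = rng.uniform([-5.0, -2.0, 2.0], [5.0, 2.0, 30.0], size=(2000, 3))
K = np.array([[700.0, 0.0, 620.0], [0.0, 700.0, 187.0], [0.0, 0.0, 1.0]])
T = np.eye(4)                      # placeholder "true" extrinsics for the sketch
depth_gt = project_to_depth_map(pts, K, T, 375, 1242)
```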
Network architecture
As shown in Figure 2, there are two parts in the calibration network, including feature extraction and feature aggregation. Each part will be introduced in detail in this section.
Feature Extraction: Two network branches extract features from the RGB image and the depth map separately. Thanks to the recent success of ResNet, 18 we use ResNet-18 as the architecture for both branches. The two branches are symmetric and initialized with pre-trained weights since, as reported in prior work, 19,20 pre-trained weights can also boost feature extraction on depth maps. Additionally, feature relevance is preserved by using the same initialization and architecture, which contributes to feature matching in the aggregation part. To apply the pre-trained weights to the one-channel depth map, we initialize the filter weights of the first convolutional layer with the mean of the weights along the three channels.
Feature Aggregation: Having obtained features from the two branches, we concatenate them along the channel dimension and feed them into the aggregation network. This part is as important as the extraction part, since accurate prediction depends strongly on the feature matching power, so careful network design is critical. Inspired by the architecture of ResNet, we stack two layers with a structure similar to ResNet layers. Unlike ResNet, to reduce dimensionality, we use half the number of channels in the second layer. We then feed the features into a convolutional layer for further feature matching and dimension reduction. Finally, we decouple the output into two identical branches that predict the rotation and translation vectors separately. In each branch, we use a convolutional layer with 1 × 1 filters to keep structural feature information and a fully connected layer to predict the vector. Compared with using one branch to predict a dual quaternion, the performance of separate vector prediction is better, since the separate convolutional and fully connected layers can automatically learn different translation and rotation information. For layers without pre-trained weights, we use He-Normal initialization, 21 which gives a more efficient prediction.
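A rough Keras sketch of the two-branch layout is given below; the small convolutional backbones stand in for the ResNet-18 branches, and the layer counts, filter sizes, and head dimensions are our assumptions for illustration, not the exact CalibDNN architecture.

```python
from tensorflow.keras import layers, Model

def branch(channels, name):
    """Stand-in feature extractor; the paper uses pre-trained ResNet-18 for each branch."""
    inp = layers.Input(shape=(375, 1242, channels), name=f"{name}_input")
    x = inp
    for filters in (32, 64, 128):
        x = layers.Conv2D(filters, 3, strides=2, padding="same", activation="relu")(x)
    return inp, x

rgb_in, rgb_feat = branch(3, "rgb")
depth_in, depth_feat = branch(1, "depth")

# Concatenate the two feature maps along the channel axis and aggregate them.
x = layers.Concatenate(axis=-1)([rgb_feat, depth_feat])
x = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)

def head(features, name):
    """Decoupled prediction head: 1x1 convolution followed by a fully connected layer."""
    h = layers.Conv2D(32, 1, activation="relu")(features)
    h = layers.Flatten()(h)
    return layers.Dense(3, name=name)(h)

rotation = head(x, "rotation_vector")
translation = head(x, "translation_vector")
model = Model([rgb_in, depth_in], [rotation, translation])
model.summary()
```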
Model generalization: The system can also be generalized to different input image sizes by downsampling the images or changing the network architecture. For example, given a larger input image size, we can add some average pooling layers in the feature aggregation part. Instead of using max-pooling layers, corresponding information along the channel dimension is kept by average pooling. Experiments with different input image sizes are conducted on the RELLIS-3D dataset. 22 Given a smaller input image size, we can eliminate the gray convolutional layer in Figure 2 to fit the image size.
Parameter conversion: With the predicted rotation vector r = (r_x, r_y, r_z)ᵀ and translation vector t = (t_x, t_y, t_z)ᵀ, we need to convert them to a transformation matrix for the loss computation. The translation vector can be used directly as the translation term of the transformation matrix. The rotation vector, however, must be converted to a rotation matrix R ∈ SO(3) by the well-known Rodrigues' rotation formula, R = I + sin θ · r̂ + (1 − cos θ) · r̂², where I is the identity matrix, r̂ is the antisymmetric (skew-symmetric) matrix of the rotation axis, and θ is the rotation angle. Combined with the translation vector t, we obtain the predicted transformation matrix T_pred ∈ SE(3), with R as its rotation block and t as its translation column.
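A small NumPy sketch of this conversion is shown below; the function name is ours, and the rotation angle is taken as the norm of the rotation vector, which is our reading of the parameterization.

```python
import numpy as np

def rotvec_translation_to_transform(r, t):
    """Convert a rotation vector r and translation t into a 4x4 SE(3) matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = r / theta                                  # unit rotation axis
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])               # antisymmetric (skew) matrix of the axis
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)  # Rodrigues' formula
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Example: 10 degrees about the z-axis with a small translation.
T_pred = rotvec_translation_to_transform(np.deg2rad([0.0, 0.0, 10.0]), np.array([0.05, 0.0, 0.1]))
print(np.round(T_pred, 3))
```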
Loss function
We use the weighted sum of three loss terms as the total loss: the transformation loss L_t, the depth map loss L_d and the point cloud loss L_p. The total loss is defined as L = λ_t L_t + λ_d L_d + λ_p L_p, where λ_t, λ_d, and λ_p denote the respective loss weights.
Transformation loss:
The target is to regress on the rotation and translation vectors, which are the output of the calibration network. Therefore, we compute the L-2 norm between the prediction and the ground truth separately on the rotation vector and translation vector.
The loss is L_t = α‖r_pred − r_gt‖₂ + ‖t_pred − t_gt‖₂, where r_pred is the predicted rotation vector, r_gt is the ground truth rotation vector, t_pred is the predicted translation vector, and t_gt is the ground truth translation vector. Since there is a difference in scale between the rotation and translation L-2 norms, we add a scalar α to weight the rotation term.
Depth map loss: Given the predicted transformation matrix T_pred, we apply the transformation to the input depth map to obtain the predicted depth map. For an input depth map with a significant miscalibration deviation, directly applying the predicted transformation would leave a large blank area, because points projected outside the image are missing. We therefore apply the transformation during the conversion between the point cloud data and the depth map to recover the missing points. With this approach, we can directly input the point cloud into the system instead of first converting the point cloud to a depth map. The predicted and ground truth depth maps are related by y_pred = P · T_pred · T_rand · T · x and y_gt = P · T · x, where x is a 3D point in the point cloud, y_pred is the corresponding 2D point in the predicted depth map, y_gt is the corresponding 2D point in the ground truth depth map, and T_rand is the random transformation. The depth map loss L_d is defined as the mean per-pixel difference between the predicted and ground truth depth maps, where N is the number of pixels in the depth map used for averaging.
Point cloud loss: Given the intrinsic camera parameters, we can back-project a depth map to point cloud data. Thus, we obtain the predicted and ground truth point clouds by back-projection from the predicted and ground truth depth maps. We use the Chamfer Distance 23 (CD) between these two point clouds as the loss function. Compared with the testing accuracy obtained with an Earth Mover's Distance (EMD) loss, the Chamfer Distance loss performs better since it preserves unordered information. To keep the point cloud loss on the same order of magnitude as the transformation loss, we also add scaling factors so that the loss is the mean minimum distance between the two point clouds. The loss is defined as L_p = (1/N) Σ_{p∈S_pred} min_{q∈S_gt} ‖p − q‖₂² + (1/M) Σ_{q∈S_gt} min_{p∈S_pred} ‖q − p‖₂², where S_pred is the predicted point cloud, S_gt is the ground truth point cloud, N is the number of points in S_pred and M is the number of points in S_gt.
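A brute-force NumPy version of the symmetric Chamfer term, together with the weighted total loss, is sketched below; the weights are the initial values quoted in the training details, the synthetic point clouds are illustrative, and a practical implementation would use batched tensors with nearest-neighbor acceleration.

```python
import numpy as np

def chamfer_distance(S_pred, S_gt):
    """Symmetric Chamfer distance: mean minimum squared distance in both directions."""
    d2 = np.sum((S_pred[:, None, :] - S_gt[None, :, :]) ** 2, axis=-1)  # N x M pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def total_loss(L_t, L_d, L_p, lam_t=4.0, lam_d=1.0, lam_p=40.0):
    """Weighted sum of transformation, depth map, and point cloud losses (initial weights from the text)."""
    return lam_t * L_t + lam_d * L_d + lam_p * L_p

# Tiny self-check on synthetic point clouds.
rng = np.random.default_rng(0)
S_gt = rng.normal(size=(500, 3))
S_pred = S_gt + rng.normal(scale=0.01, size=S_gt.shape)
print(chamfer_distance(S_pred, S_gt))
```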
Iterative refinement
We find that there is a limitation in the optimization process during training. The mean rotation and translation errors can only be reduced by a certain amount, which means that when the miscalibration is large enough, the mean errors cannot be reduced to a sufficiently small value. However, although our model uses a single iteration, it can still be extended to multiple iterations to boost performance when miscalibration ranges are large. We can apply iterative refinement to improve the prediction. We train different networks on different miscalibration ranges, specifically from large to small ranges. Then, given input pairs with large miscalibration, we test by feeding them into the models pre-trained from large to small miscalibration, step by step. The process of iterative refinement is shown in Figure 3.
In Fig. 3, (r_i, t_i) are the miscalibration ranges, in the form of rotation and translation vectors, that satisfy r_0 > r_1 > r_2 > · · · > r_i and t_0 > t_1 > t_2 > · · · > t_i. T_i is the predicted transformation of each network. Thus, the final predicted transformation is the composition of the per-stage predictions, T_final = T_i · T_{i−1} · · · T_1 · T_0.
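The coarse-to-fine composition can be expressed as a short loop; this sketch assumes each stage returns a 4×4 transform and that a hypothetical reproject helper re-aligns the depth map between stages.

```python
import numpy as np

def iterative_refinement(models, depth_input, rgb, reproject):
    """Apply a sequence of networks trained on decreasing miscalibration ranges.

    `models` are ordered from the largest to the smallest range and each returns a
    4x4 transform; `reproject` applies a transform to the current depth map. Both
    are hypothetical callables standing in for the trained networks.
    """
    T_final = np.eye(4)
    depth = depth_input
    for model in models:
        T_i = model(rgb, depth)          # predicted correction at this stage
        depth = reproject(depth, T_i)    # re-align the depth map for the next stage
        T_final = T_i @ T_final          # accumulate corrections: T_final = T_i ... T_1 T_0
    return T_final
```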
Dataset preparation
We use the KITTI dataset, 24 the most popular dataset in autonomous driving research. We utilize the raw data from the KITTI dataset, including RGB images obtained from the PointGray Flea2 color camera and point clouds obtained from the Velodyne HDL-64E rotating 3D laser scanner. We use the synchronized and rectified version of sequence '2011 09 26', with image resolution 1242 × 375. This sequence contains different types of scenes, including city, residential, and road, which provide considerable information for network training. The dataset is already calibrated by a traditional method. 2 As discussed in subsection 3.2, to obtain the uncalibrated image pairs, we apply a random transformation to the depth map. To be consistent with CalibNet, 13 we generate 24,000 pairs of training images and 6,000 pairs of testing images, and use miscalibration in the range of (−10°, +10°) in rotation and (−0.25 m, +0.25 m) in translation. To test the generalization power of our method, we also use sequence '2011 09 30', with image resolution 1226 × 370, to which we apply zero padding to warp the images to the same size as sequence '2011 09 26'. The two sequences were captured on different dates and in different scenes, with individual extrinsic parameters.
Compared with the urban scenes in the KITTI dataset, the RELLIS-3D dataset is collected in an off-road environment, which provides strong texture, complex terrain, and unstructured features, with grass, bush, forest, and soil. It is challenging for most current algorithms, which are mainly trained on urban scenes. We use RGB images obtained from the Basler acA1920-50gc camera and corresponding point clouds obtained from the Ouster OS1 LiDAR. To be consistent with the image size of the KITTI dataset, we downsample the original images and depth maps, of size 1200 × 1920, to 1242 × 375. We generate a training set of 20,000 pairs and a testing set of 5,000 pairs using the same miscalibration ranges as for the KITTI dataset.
Training details
We use the TensorFlow library 25 to build the network. To accelerate network training and prevent overfitting, we use batch normalization 26 after each convolutional block and dropout 27 at the fully connected layers. We set the dropout parameter to 0.5. We also add an L-2 regularization term when training on the RELLIS-3D dataset. We use the Adam optimizer 28 and set the parameters to the suggested values β₁ = 0.9, β₂ = 0.999 and ε = 1e-8. We use an initial learning rate of 1e-3 and reduce it every few epochs. For the loss function, we set the initial λ_t to 4, λ_d to 1 and λ_p to 40 to keep the terms on a similar scale. We then keep λ_d unchanged and slowly reduce λ_t and λ_p. We train for 25 epochs in total on a server machine with 2 GPUs. Figure 4 shows examples of calibration results in different scenes, demonstrating that our model can accurately calibrate the point cloud and RGB image in different scenes, from small to large miscalibration ranges.
Results and evaluations
To compare fairly with CalibNet, we evaluate the model performance separately by the geodesic distance over the rotation and the absolute error over the translation, where ε_r is the geodesic distance between the predicted rotation matrix R_pred and the ground truth rotation matrix R_gt (the angle of the relative rotation R_predᵀ R_gt), and ε_t is the absolute error between the predicted translation vector t_pred and the ground truth translation vector t_gt.
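These metrics can be computed as below; the geodesic distance is written as the angle of the relative rotation, which is our reading of the metric, and the function names are ours.

```python
import numpy as np

def geodesic_rotation_error(R_pred, R_gt):
    """Angle (radians) of the relative rotation R_pred^T R_gt on SO(3)."""
    cos_angle = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

def translation_abs_error(t_pred, t_gt):
    """Per-axis absolute error between predicted and ground truth translations."""
    return np.abs(np.asarray(t_pred) - np.asarray(t_gt))

# Sanity check: identical rotations give zero geodesic error.
R = np.eye(3)
print(np.degrees(geodesic_rotation_error(R, R)))  # 0.0
```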
We report a mean absolute error (MAE) for the rotation prediction of (Roll: 0.11, Pitch: 0.35, Yaw: 0.18) and for the translation prediction of (X: 0.038, Y: 0.018, Z: 0.096) on the testing set. Figure 5 shows the evaluation curves, in which (a) shows the low rotation absolute errors over the large range of miscalibration, and (c) shows that the geodesic distances of most instances concentrate near 0. (b) shows the low translation absolute errors against the large range of miscalibration, and (d) shows that the absolute translation errors of most instances concentrate near 0. The transparent colormap is the depth map, from blue to red corresponding to small to large distances. The first row shows the input RGB images. The second row shows the input depth maps, the third row the predicted depth maps, and the fourth row the ground truth depth maps, with each depth map overlaid onto the RGB image. The red rectangular boxes in the second row mark the misalignment between the input depth maps and RGB images, and in the third row the accurate alignment between the predicted depth maps and RGB images.
Comparison with the state-of-the-art methods: Table 1 shows the performance comparison among our method, RegNet, 11 CalibNet, 13 and CalibRCNN. 14 In RegNet, the authors use iterative refinement, estimating the extrinsic parameters with different networks trained on miscalibration ranges from large to small, iteratively. Also, the overall miscalibration range of RegNet, (-1.5m, 1.5m)/(-20°, 20°), is larger than ours; therefore, a rigorous comparison between our CalibDNN and RegNet is difficult. We keep the experimental conditions consistent with CalibNet, including the same miscalibration ranges and the same number of training samples, which makes the results of our method and CalibNet comparable. Although it can be extended to multiple iterations, our current method uses only one model with one iteration, whereas CalibNet uses multiple models with two strategies to fine-tune the translation values after obtaining the rotation values from the first model. In one strategy, given the ground truth rotation parameters, they predict the translation parameters separately (the 2nd row in Table 1); in the other, given the rotation estimate, they predict the translation parameters separately by an iterative re-alignment methodology (the 3rd row in Table 1). It is fair to compare our results with the second strategy of CalibNet. We also list the result of CalibRCNN, which is also a single model and uses the same miscalibration ranges as we do.
Therefore, we focus on the comparison among the results in the last three rows of Table 1, in which the best results are indicated in boldface. Our method, CalibDNN, in the last row outperforms the other two methods on almost all axes, especially pitch and the X-axis. More specifically, compared with CalibNet, our method is simpler, with just a single model and single iteration, but achieves better performance. Compared with CalibRCNN, our method also performs substantially better. This performance comes from the careful design of the network architecture and a reasonable training strategy.
Testing on an urban scene dataset: To evaluate the generalization ability of the proposed model, we use KITTI sequence '2011 09 30' and the RELLIS-3D dataset as testing sets and evaluate the performance of the model trained on KITTI sequence '2011 09 26'. Figure 6 shows the results for KITTI sequence '2011 09 30' in different scenes, with the same layout as Figure 5. We observe that the misalignment is rectified in the prediction, which implies accurate prediction of the extrinsic parameters. For sequence '2011 09 30', we report an MAE rotation prediction of (Roll: 0.15, Pitch: 0.99, Yaw: 0.20) and translation prediction of (X: 0.055, Y: 0.032, Z: 0.096). The MAE is slightly larger because of the different scenes. For RELLIS-3D, the MAE rotation prediction is (Roll: 1.40, Pitch: 3.44, Yaw: 2.33) and the translation prediction is (X: 0.101, Y: 0.121, Z: 0.186). The performance is worse because the training scenes differ significantly from the testing scenes, from urban to field environments. This scene is challenging for calibration because of its complex and unstructured features.
Training on a terrain scene dataset: Since the scenes of KITTI and RELLIS-3D are very different, one being an urban environment and the other an off-road environment, we re-train the model on RELLIS-3D and show testing results in different scenes in Figure 7, with the same layout as Figure 5. Although there is a visually detectable deviation between the prediction and the ground truth, the misalignment of the input depth map is still rectified significantly. We report an MAE rotation prediction of (Roll: 1.00, Pitch: 2.57, Yaw: 1.94) and translation prediction of (X: 0.092, Y: 0.074, Z: 0.082). Compared with the MAE on RELLIS-3D in the last subsection, the performance is much better. Although not as good as on the KITTI dataset, the performance is still acceptable since the original miscalibration is reduced significantly. We also find that the original calibration of RELLIS-3D is not very accurate, which also contributes to the large MAE. We believe our CalibDNN rectifies this inaccurate calibration to some extent.
CONCLUSION
This paper proposes a novel approach for multimodal sensor calibration that predicts the 6-DoF extrinsic parameters between an RGB camera and a 3D LiDAR sensor. The calibration process is critical for multimodal scene perception since an accurate calibration paves the way for later information fusion. Unlike traditional calibration techniques, we utilize advanced deep learning to solve the challenging sensor calibration problem. The developed approach is fully data-driven, end-to-end, and automatic. The network model is simpler than existing state-of-the-art methods, with a single model and single iteration, and combines the merits of geometric supervision and regression on the transformation; the performance of our model is state-of-the-art. Given a miscalibration range of (-10°, 10°) in rotation and (-0.25 m, 0.25 m) in translation, the model achieves a mean absolute error of 0.21° in rotation and 5.07 cm in translation. Training and testing on challenging datasets were conducted to demonstrate the value and utility of the developed approach for real-world applications. In the future, we will further improve the performance and apply iterative refinement to deal with larger miscalibration ranges. Additionally, we will explore non-DNN, non-backpropagation learning networks, which is also an interesting topic. | 2021-03-30T01:16:17.702Z | 2021-03-27T00:00:00.000 | {
"year": 2021,
"sha1": "1659c46e626bb917291ed9da32c5f0d00518dd23",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2103.14793",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1659c46e626bb917291ed9da32c5f0d00518dd23",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
2429416 | pes2o/s2orc | v3-fos-license | Towards the creation of decellularized organ constructs using irreversible electroporation and active mechanical perfusion
Background Despite advances in transplant surgery and general medicine, the number of patients awaiting transplant organs continues to grow, while the supply of organs does not. This work outlines a method of organ decellularization using non-thermal irreversible electroporation (N-TIRE) which, in combination with reseeding, may help supplement the supply of organs for transplant. Methods In our study, brief but intense electric pulses were applied to porcine livers while under active low temperature cardio-emulation perfusion. Histological analysis and lesion measurements were used to determine the effects of the pulses in decellularizing the livers as a first step towards the development of extracellular scaffolds that may be used with stem cell reseeding. A dynamic conductivity numerical model was developed to simulate the treatment parameters used and determine an irreversible electroporation threshold. Results Ninety-nine individual 1000 V/cm 100-μs square pulses with repetition rates between 0.25 and 4 Hz were found to produce a lesion within 24 hours post-treatment. The livers maintained intact bile ducts and vascular structures while demonstrating hepatocytic cord disruption and cell delamination from cord basal laminae after 24 hours of perfusion. A numerical model found an electric field threshold of 423 V/cm under specific experimental conditions, which may be used in the future to plan treatments for the decellularization of entire organs. Analysis of the pulse repetition rate shows that the largest treated area and the lowest interstitial density score was achieved for a pulse frequency of 1 Hz. After 24 hours of perfusion, a maximum density score reduction of 58.5 percent had been achieved. Conclusions This method is the first effort towards creating decellularized tissue scaffolds that could be used for organ transplantation using N-TIRE. In addition, it provides a versatile platform to study the effects of pulse parameters such as pulse length, repetition rate, and field strength on whole organ structures.
Background
Over the past fifty years, organ transplantation has become a standard of care for patients diagnosed with end-stage organ failure, including cirrhosis and renal failure. Liver transplantation is very successful, with 90 and 75% survival rates after 1 and 5 years, respectively. Unfortunately, the number of patients with cirrhosis, chronic viral hepatitis and hepatocellular carcinoma has steadily increased, leading to unmet demands for organ transplantation [1]. According to the United Network for Organ Sharing (UNOS), there are over 108,000 candidates in the US alone currently waiting for organ transplants, including kidney, liver, heart, and lung. In 2009, there were fewer than 7,000 liver transplants from both living and deceased donors [2].
Despite advances in transplant surgery and general medicine, the number of patients awaiting transplant organs continues to grow, while organ supply does not. Organ supply is constrained by obstacles that impede acquisition, such as the requirement for organ removal coincident with brainstem death necessitating the use of hospital resources to maintain artificial life support. As a result, organ donation may be problematic when intensive care resources are strained [3]. In addition, life support for potential organ donations has been ethically debated [4,5] and donation refusal is common in regions where social, cultural, and religious pressures constrain organ procurement.
The increasing gap between organ donation and supply to severely ill patients has fostered an increased interest in alternative organ sources [6]. For the development and differentiation of full organs suitable for human transplant, structures that provide microvasculature for the delivery of nutrients to all cells must be developed [7][8][9]. Traditional top-down manufacturing techniques are currently unable to produce a hierarchical vascular structure spanning the more than 4 orders of magnitude in scale found in human organs [10]. Microfabrication techniques can replicate some features of the complex architecture of mammalian microvasculature, but current processes fail to extend into the macro-scale [11]. Thus, structures with features spanning multiple length scales are currently only fabricated through biological mechanisms, and the relatively new field of biofabrication has developed with the goal of utilizing and manipulating these processes [12].
Decellularization of existing tissues extends the concept of biofabrication by taking advantage of the body's natural programming to create a complete tissue, including a functional vascular network. Rat liver extracellular matrix constructs have been created using chemical decellularization and reseeding [13][14][15]. Decellularized rat hearts, reseeded with multiple cell types, can contract and have the ability to generate pumping pressures [16]. Challenges to chemical decellularization techniques include the potential for detergents to damage extracellular matrix components [17,18] the potential to create and deposit toxins [13,17], and the inherent difficulty of scaling these techniques up from small rat organs to larger organs [14]. These challenges must be overcome before decellularized organs can successfully be translated to the clinical setting.
Xenotransplantation, or the transplantation of animal organs, is one potential solution to future organ shortages [19]. Porcine xenotransplants have shown considerable potential but have failed to become widely accepted or used clinically. Transplantation of porcine pancreatic islets has recently been shown to temporarily reverse diabetes mellitus [20,21], and the use of T-cell tolerance protocols has demonstrated the feasibility of long-term renal xenograft transplantation in a non-human primate model [22]. Additionally, it has been shown that explanted porcine livers have the ability to clear ammonium and restore coagulation while under short-term perfusion with human plasma [23,24]. Unfortunately, the mechanisms of graft loss and rejection in these transplants are still not well understood, and immunological rejection remains a significant barrier to successful transplantation [25].
Hypothermic oxygenated perfusion (HOPE) is a method of whole organ preservation which mechanically delivers an oxygenated, nutrient-rich blood substitute to an entire organ at sub-physiological temperatures [26,27]. This method has been successfully demonstrated to improve the preservation quality and transplant success rates of kidneys which have undergone warm ischemia [28,29]; with research striving to reach 72 hour preservation times [30]. In addition, Schon et al. [31] and Brockmann et al. [32] have demonstrated the ability to prolong organ quality using normothermic perfusion, a process in which the perfused fluid is held at or near physiological temperatures. These methods of organ preservation can be used to isolate N-TIRE tissue ablation effects from the immune response observed in vivo and the natural degradation of tissue post mortem.
Electroporation is a non-linear biophysical process in which the application of pulsed electric fields leads to an increase in permeability of cells, presumably through the creation of nanoscale pores in the lipid bilayer [33]. At low pulsing energy, this permeability is reversible and cellular health and function is maintained. Once a critical electric field intensity threshold is surpassed (approximately 500 [34] to 700 V/cm [35] for ninety 50 μs pulses at 4 Hz in brain and eight 100 μs pulses at 1 Hz in liver, respectively), the cell membrane is unable to recover and cell death is induced in a precise and controllable manner with sub-millimeter resolution [36,37]. This process is referred to as non-thermal irreversible electroporation (N-TIRE) [38]. N-TIRE does not rely on thermal mechanisms [38] and preserves the structure of the underlying extracellular matrix as well as nerve conduits and bile ducts [39]. Since N-TIRE cell death does not require any drugs, there should not be any creation or deposition of toxins when killing the cells from this technique.
Recently, we and others have determined, through the use of translational laboratory models, that capitalizing on the ability of N-TIRE to destroy cells without destroying the extracellular matrix might make N-TIRE a viable means for scaffold creation via organ decellularization [40,41]. We hypothesize that viable decellularized tissue scaffolds can be obtained using non-thermal irreversible electroporation (N-TIRE) on organs under continuous perfusion.
Machine-perfused porcine livers were treated with N-TIRE using external plate or needle electrodes within one hour of organ harvest and establishment of active perfusion. At varying time points after electroporation, livers were removed from perfusion, immediately after which samples were collected, preserved in 10% neutral buffered formalin, prepared for histology, and their microscopic structure was examined. Examination of the N-TIRE treated and control (untreated) regions of tissue demonstrated that N-TIRE was capable of decellularizing large volumes of tissue when performed in conjunction with active organ perfusion, suggesting that N-TIRE may be a viable method of decellularization for tissue engineering applications.
Tissue
Young mixed breed pigs were sacrificed via barbiturate overdose. Livers were harvested and placed on ice within 15 minutes of death. Vascular anastomosis with the perfusion system was created by inserting Luer lock syringe connections into the portal vein, hepatic artery, and major hepatic vein, which were then secured with zip ties. The livers were flushed with lactated Ringer's solution (LRS) to remove blood and clots before placement on the perfusion system.
Perfusion
The VasoWave™ Perfusion System (Smart Perfusion, Denver, NC) was used to perfuse the livers for 4 and 24 hours. This system produces a cardio-emulating pulse wave to generate physiological systolic and diastolic pressures and flow rates within the organ. The system is capable of controlling the oxygen content of the perfusate above and below physiological norms. A perfusate consisting of modified LRS was delivered to the portal vein and hepatic artery and recycled back into the system via the hepatic vein. All livers were under active machine perfusion within one hour post-mortem, and the perfusate was held at 4°C.
Electroporation
The ECM 830 Square Wave BTX Electroporation System (Harvard Apparatus, Cambridge, MA) was used to deliver low-energy pulses to the liver tissue while it was on ice undergoing active perfusion with the solution maintained at 21°C. Two metal plate electrodes, 2 cm in diameter, were attached to a pair of ratcheting vice grips (38 mm, Irwin Quick-Grips) using Velcro. High voltage wire was used to connect the electrodes to the BTX unit. The electrodes were clamped gently to the liver and the center-to-center distance between the electrodes was measured. The voltage output of the BTX unit was adjusted such that the approximate applied electric field was 1000 V/cm. Then, ninety-nine individual 100-μs square pulses were administered at repetition rates of 0.25, 0.5, 1.0 and 4.0 Hz. Repetition rate trials were performed in random order and repeated a minimum of three times. Sham controls were performed by placing the electrodes over the tissue without delivering any pulses. Since needle electrodes are typically employed in clinical applications of IRE, two additional trials were performed using needle electrodes separated by 0.5 cm, inserted into the tissue approximately 1 cm, using a voltage-to-distance ratio of 1500 V/cm at rates of 1 and 4 Hz. The experimental setup for plate electrodes is illustrated in Figure 1a. All N-TIRE treatments were completed within two hours post mortem. The surface lesion created at each treatment site was measured at the end of the 24 hour perfusion period. Statistical analysis of the lesion diameters was conducted using JMP 8.0 (SAS Institute Inc., South Cary, NC) via Student's t-test with a 0.1 α level. Histological images were imported into ImageJ (Version 1.43u, NIH, USA). For each sample, a binary image was created using the threshold tool based on a sample selected within an acellular region. An average pixel value for each image was calculated using the measure RGB plugin and a density score was created by normalizing these values to 1, where 1 corresponds to regions filled with cells and extracellular material and 0 to a region completely devoid of material. Samples were analyzed for statistical significance in JMP via Student's t-test with a maximum 0.05 α level and a minimum of 6 samples for each treatment group.
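The density score described above can be approximated in a few lines of NumPy; the threshold value, the synthetic image, and the assumption that material appears darker than the acellular sample are our own stand-ins for the ImageJ workflow.

```python
import numpy as np

def density_score(gray_image, acellular_threshold):
    """Fraction of pixels darker than a threshold taken from an acellular region.

    Returns a value near 1 for tissue filled with cells/extracellular material and
    near 0 for tissue devoid of material, mirroring the normalized ImageJ score.
    """
    binary = np.asarray(gray_image) < acellular_threshold  # True where material is present
    return binary.mean()

# Hypothetical example: an 8-bit grayscale histology image with a threshold of 200.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(512, 512))
print(density_score(img, 200))
```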
Tissue preparation
Following N-TIRE treatment and machine perfusion, livers were disconnected from the VasoWave™ system, immediately sectioned to preserve lesions, and tissues were immediately fixed by immersion in 10% neutral buffered formalin solution. After fixation, tissues were trimmed and processed for routine paraffin embedding, then sectioned at 4 micrometers, and stained with hematoxylin-eosin (H&E) or Masson's trichrome stain. Tissue sections were evaluated by a veterinary pathologist who had no knowledge of the N-TIRE treatment parameters.
Numerical model
Numerical modeling can be used to predict the electric field distribution, and thus provide insight into the N-TIRE treatment regions in tissue [42,43]. This was chosen as the method to correlate lesion volume with electric field in the liver. The methods for predicting N-TIRE areas are similar to those described by Sel et al. [35]. In order to determine the effective electric field threshold to induce N-TIRE in the liver, finite element simulations were conducted using Comsol Multiphysics 3.5a (Comsol, Stockholm, Sweden). The numerical model was constructed using 2 cm diameter plates, each 1 mm thick, placed above and below the tissue. The model was generated in an axisymmetric platform and the conductivity changes incorporated the effects of electroporation and temperature as described by Garcia et al. [34], with parameters identical to those in [35]; its physical setup may be seen in Figure 2. The electric field distribution is given by solving the Laplace equation ∇·(σ∇φ) = 0, where σ is the electric conductivity of the tissue and φ is the electric potential. The electrical boundary condition along the tissue that is in contact with the energized electrode is φ = V_o. The electrical boundary condition at the interface of the other electrode is φ = 0.
The boundaries where the analyzed domain is not in contact with an electrode are treated as electrical insulation.
Conductivity changes due to electroporation and temperature were modeled to calculate a dynamic conductivity of the form σ = σ₀ [1 + 2.6 · flc2hs(normE_dc − E_delta, E_range)], where σ₀ is the baseline conductivity. flc2hs is a smoothed Heaviside function with a continuous second derivative that ensures convergence of the numerical solution. This function is defined in Comsol, and it changes from zero to one when normE_dc − E_delta = 0 over the range E_range. However, any continuous step function may be used to model the conductivity change, depending on the application, such as the sigmoidal functions proposed in [35,44]. In the flc2hs function, which mimics the sigmoidal ones, normE_dc is the magnitude of the electric field, and E_delta is the magnitude of the electric field at which the transition occurs over the range E_range. In the simulations, we used E_delta = 580 V/cm and E_range = ±120 V/cm. These values were selected from the literature, in which models incorporating conductivity changes due to electroporation were validated with real-time measurements in rat and rabbit liver [35,45]. The baseline tissue conductivity was set to 0.067 S/m [35], and the conductivity of N-TIRE affected tissue was considered to increase by a factor of 3.6, as determined by Sel et al. [35,46], reaching a final conductivity of 0.241 S/m. The electric field within the tissue domain was first determined using a conductivity of 0.067 S/m, adjusted to incorporate the dynamic conductivity, and re-evaluated to determine the final electric field distribution. This numerical model was solved for the pulse parameters that produced the maximum thermal effects (i.e., 1500 V/cm at 4 Hz) used on the livers in order to obtain a simulation of the electric field to which the tissue was exposed, using a 1.5% °C⁻¹ (Δσ/σ/ΔT) temperature coefficient of electrical conductivity. The temperature was calculated with the Pennes bioheat equation with an additional Joule heating term [47], using values of the liver tissue heat capacity (c_p = 3.6 kJ·kg⁻¹·K⁻¹), thermal conductivity (k = 0.512 W·m⁻¹·K⁻¹), density (ρ = 1050 kg·m⁻³), blood perfusion per unit volume (w_b = 1 kg·m⁻³·s⁻¹), and heat capacity of blood (c_p = 3.64 kJ·kg⁻¹·K⁻¹) taken from the literature [48,49]. The outer surface of the analyzed liver domain and the top electrode surface were mathematically considered to have proportional loss to air due to convective heat transfer, h = 10 W/(m²·K), as in [50], with T∞ = 21°C. The electrode-tissue boundaries were treated as continuity boundaries. The N-TIRE electric field thresholds were then found by measuring lesion dimensions and determining the electric field value at this region in the model after the completion of the 99 pulses, thus incorporating all the thermal effects as well.
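The dynamic conductivity can be sketched in Python with a smoothed step in place of Comsol's flc2hs; the tanh smoothing and the multiplicative temperature term are our approximations, while the numerical parameters are those quoted in the text.

```python
import numpy as np

def smoothed_step(x, half_range):
    """Smooth 0-to-1 transition over [-half_range, +half_range] (approximate stand-in for flc2hs)."""
    return 0.5 * (1.0 + np.tanh(3.0 * x / half_range))

def dynamic_conductivity(E_mag, T, sigma0=0.067, factor=3.6,
                         E_delta=580.0, E_range=120.0, alpha=0.015, T_ref=21.0):
    """Tissue conductivity (S/m): a 3.6x increase across the electroporation threshold
    plus a 1.5%/degC temperature dependence, using the parameters quoted in the text."""
    step = smoothed_step(E_mag - E_delta, E_range)
    sigma_e = sigma0 * (1.0 + (factor - 1.0) * step)      # 0.067 S/m -> ~0.241 S/m above threshold
    return sigma_e * (1.0 + alpha * (T - T_ref))

# Field magnitudes below, at, and above the 580 V/cm transition, at 25 degC.
print(dynamic_conductivity(np.array([100.0, 580.0, 1000.0]), T=25.0))
```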
Results
Surface lesions developed during perfusion within 30 minutes of initiating treatment. The areas of the lesions created on the liver surface by plate electrodes were larger than, but of the same type as, those from the needle electrodes. In Figure 1b, a 3.3 cm surface lesion produced by an applied voltage of 1500 V may be seen, imaged 4 hours after treatment. According to the numerical model, this lesion size was produced within the region of tissue experiencing an electric field of 423 +/- 147 V/cm (average ± standard deviation). The results of the numerical model for this trial may be seen in Figure 2.
The average applied voltage-to-distance ratio between the plates for the frequency trials was 962 V/cm, corresponding to an average applied voltage and tissue thickness of 696.9 +/- 141.7 V and 7.3 +/- 1.5 mm, respectively. Lesions from these trials developed over 22 hours post-treatment and were 2.5 cm in diameter on average (125% of the electrode diameter), with a minimum lesion of 2 cm occurring at 0.25 Hz and 936 V/cm, and a maximum lesion of 3.2 cm occurring at 1.0 Hz and 950 V/cm. Though not dramatically significant, the results suggest that lesion sizes were on average greatest at 1 Hz and decreased as the frequency increased or decreased. The lesions which developed after treatments applied at 0.25 and 4 Hz were statistically smaller (α = 0.1) than those which developed for treatments applied at 1 Hz (Figure 3). Future studies will investigate the role of pulse parameters such as repetition rate, duration, magnitude and number on lesion volume.
Analysis of the treated tissue revealed a uniform treatment region that extended cylindrically through the tissue with no visible damage distal to the treatment regions. This resulted in calculated treated volumes between 1.97 cm³ and 6.37 cm³ for corresponding tissue thicknesses of 0.628 and 0.792 cm.
Histological examination 24 hours post-treatment indicates that treated regions exhibit cell death (Figure 4b) compared to controls (Figure 4a). Hepatic acini in pigs are bordered by connective tissue, which contains blood vessels and biliary structures, and have a prominent cord architecture terminating in a hepatic venule. In areas adjacent to energy delivery, hepatic cell cords were well preserved, with mildly vacuolated hepatocytes (an expected finding at the 24-hour point of the ex vivo machine perfusion cycle). Sinusoidal structure in untreated areas is open, reflecting the flow of perfusate between the hepatic artery/portal vein and the hepatic vein. N-TIRE treatment disrupts hepatic cords and induces cell degeneration (Figure 4b). Preservation of major acinar features, including connective tissue borders and blood vessels, is evident. In zones of N-TIRE treatment, cell cords were indistinct and the membranes lining sinusoids were fragmented to varying degrees.
Pigs, like humans, have substantial septation of liver acini by thin bands of fibrous connective tissue that run between portal triads. This macrostructure had an effect on the distribution of lesions induced by electroporation. Lesions were confined within structural acini, such that at the edges of the electroporation field acini with lesions could border normal or nearly normal acini. Thus, the bands of connective tissue act as insulation for the electrical pulsing, an important observation when considering procedures for treating focal liver lesions with electroporation or for evolving an intact connective tissue/duct/vascular matrix for subsequent tissue engineering. Figure 5a shows a portion of untreated porcine liver with normal sinusoidal cell cords arrayed from portal tracts to central vein. Cell morphology is well preserved. Some vascular congestion with red blood cells is noted and there is also mild centrilobular biliary stasis. Mildly damaged porcine acini are observed in regions subjected to electroporation with needle electrodes (Figure 5b). The center of the acinus shows disruption of cord architecture and some cell degeneration and clumping. A higher magnification view of this area is shown in Figure 5c, where the cellular changes are more readily appreciated. These treated regions display mild lesions consisting of hepatocytic cord disruption and cell degeneration. Administration of N-TIRE treatment, either with needle electrodes or with plate electrodes, produced lesions in some hepatic acini that are distinctive. The severity of lesions within individual acini ranges from mild to moderately severe. Mild lesions consisted of small clumps of hepatocytes that detach from basal membranes. These cells show a loss of organization of fine intracellular structure and clumping of cytoplasm/organelles (Figure 5b-c).
Moderately severe lesions are readily discerned (Figure 6). Cells affected by the N-TIRE procedure show varying degrees of cell swelling and karyolysis (Figure 6b). Within individual acini, most cells are affected. In some acini, frank nuclear pyknosis and cellular degeneration is seen, with small clumps of hyperchromic cells unattached from basal membranes. In some acini, centrilobular biliary stasis is noted, with aggregation of bile pigments in distal sinusoidal spaces. In all cases, as noted, bridging bands of connective tissue, with intact bile ducts and vascular structures, are seen, even immediately bordering acini with significant N-TIRE-induced tissue damage.
The density score for control samples was 0.87 +/- 0.0097, corresponding to approximately 87% of the histological tissue containing cells and extracellular material. Each treatment group had a statistically significantly different density score versus the control (α = 0.01). The lowest density score, 0.509 +/- 0.069, was obtained for N-TIRE treatments where pulses were applied at a rate of 1.0 Hz. Additionally, treatments applied at 1.0 Hz resulted in a statistically significantly lower density score compared to all other treatment groups (α = 0.05).
Discussion
To the best of our knowledge, this is the first work reporting the effect of non-thermal irreversible electroporation in an actively perfused organ. This effort is the first step towards creating decellularized tissue scaffolds that could be used for organ transplantation. This paper is aimed as a proof of concept to show that the cells may be removed, and therefore we targeted our study towards treating centimeter-scale regions of tissue. However, because N-TIRE procedures are dependent on the electric field to which a region of tissue is exposed, and thermal effects are mitigated by brief pulses with intervals between pulses, it is possible to scale up N-TIRE procedures to treat larger regions of tissue and organs.
The clearance of cellular debris was analyzed in this study using an image analysis algorithm as a preliminary method to determine the effectiveness of this technique. A more comprehensive study will include staining for primary and secondary antibodies, apoptotic markers, and DNA [16] and analysis of these samples via electron microscopy. Assays to determine the quantity of sulfated glycosaminoglycans, elastin, and collagen will be used as a measure of success of this method to preserve the important proteins in the extracellular matrix [13]. Additionally, biodegradation evaluation [13] and analysis of the vascular structure [14] must be completed before cell seeding and animal studies can be conducted.
The results reported here were localized to volumes of tissue up to 6.37 cm³ for a single N-TIRE treatment. This can readily be expanded into much larger volumes by performing multiple treatments, with the goal of creating decellularized structures for partial and full liver transplants. Analysis of the pulse repetition rate shows that the largest treated area and the lowest density score were achieved for a pulse frequency of 1 Hz. After 24 hours of perfusion, a maximum density score reduction of 58.5 percent had been achieved and cellular debris remained within the tissue construct. Since cell viability in the treatment regions was minimal, this is likely due to the combination of three factors: adhesion of cellular debris to the extracellular matrix, low physiological flow rates and pressures at the lobule level, and possible damage to the microvasculature by the N-TIRE treatments.
Although electroporation has been shown to preserve major blood vessels, vascular occlusion after electroporation has been reported in the literature under multiple treatment regimens, including work done by Edd et al. [51] (a single 20 ms, 1000 V/cm pulse), Sersa et al. [52] (eight 100 μs, 1300 V/cm pulses at 1 Hz) and Nuccitelli et al. [53] (three hundred 300 ns, 40 kV/cm pulses at 0.5 Hz), and is reportedly due to two mechanisms. The first is a rapid onset of temporary vasoconstriction due to reflex vasoconstriction of vascular endothelial cells, lasting between 1 and 3 minutes [54]. The second, slower mechanism is due to the disruption of the microfilament and microtubule cytoskeletal networks, which are necessary for maintaining cell function and structure [55]. This decrease in blood flow has been observed to last up to 4 to 8 hours after electroporation of in-vivo tumors [56] before partially recovering to normal physiological values after 24 hours [57,58]. Thus, electroporation induces a profound but essentially transient and reversible decline in blood flow [56]. This phenomenon may have occurred during the course of perfusion ex vivo, though it was not directly observed, and it may have an effect on the clearance of cellular debris from the vascular network.
Additionally, the branching network of vessels within the liver produces a system with low pressures and fluid velocities at the capillary level and within individual lobules. The combination of physiological geometry and the loss of fine capillary structure caused by the N-TIRE treatments may have resulted in local shear stresses that were not large enough to fully clear cellular debris from the tissue. Removal of this debris is essential to minimizing the immune response of recipients. Future work will focus on optimizing treatment and perfusion protocols to minimize disruption of the microvasculature network while enhancing the clearance of debris. This may include the continuous application of sub-1000 V/cm pulses at 1 Hz and perfusion at higher-than-physiological pressures, which we believe will enhance the decellularization process. Recently developed chemical decellularization processes require perfusion cycles of up to 72 hours [14] for the complete removal of cellular material from a rat liver matrix, and extension of N-TIRE treatment and perfusion cycles to these durations may be necessary to achieve complete decellularization.
Both external plate electrodes and needles placed within the tissue produced clearly delineated regions of cell death. Plate electrodes produced circular surface lesions, which appeared cylindrical in sectioned samples and extended between the top and bottom electrodes. Sections of tissue treated with needle electrodes showed oval-shaped surface lesions which extended through the tissue.
Needle punctures damaged the tissue structure and provided an alternative path for fluid to flow. Rather than returning through the vasculature, some perfusate escaped the organ through the punctures hindering the perfusion process. Due to this, treatment of an entire organ using needle electrodes is likely not possible and external electrodes appear to be the best method of inducing N-TIRE in large tissue volumes.
In N-TIRE areas, cell death was directly related to the energy delivered. Close to the electrode placements, over 90% of the cells were degenerate and in varying stages of lysis. In the more reversibly energized zone, 20-30% of cells were disrupted. Other cells may have been degenerate or leaky, but not morphologically abnormal. We have observed that the machine perfusion system can mobilize large amounts of cellular debris, a significant benefit for tissue engineering.
In addition to producing decellularized tissue scaffolds, this method provides an ideal platform to study the effects of pulse parameters such as pulse length, repetition rate, and field strength on whole organ structures. Additionally, since we have direct control over the electrical properties of the perfusate, this could serve as a model for examining the effects of N-TIRE on diseased or cancerous organs with unique electrical or physical properties.
The development of engineered materials to replicate the structure and function of thoracic and abdominal organs has achieved only limited success. Large volumes of poorly-organized cells and tissues cannot be implanted due to the initial limited diffusion of oxygen, nutrients and waste [59,60]. Despite this, researchers have made some progress toward complete organ regeneration. For instance, mouse renal cells, grown on decellularized collagen matrices and implanted into athymic mice, developed nephron-like structures after 8 weeks [9]. In addition, five millimeter thick porous polyvinyl-alcohol (PVA) constructs, implanted in mice and then injected with hepatocytes, developed liver-like morphology over the course of one year [7]. However, cell survival and proliferation in each of these structures was limited to a few millimeters from a nutrient source [7].
The resulting scaffolds from N-TIRE plus perfusion maintain the vasculature necessary for perfusion into structures far beyond the nutrient diffusion limit that exists for non-vascularized structures. Since the temperature of the perfusate used can be as low as 4°C, thermal aspects associated with Joule heating are negligible. This provides an ideal platform in which to explore the effects on the cells and tissue of electric fields in isolation from the effects of thermal damage. Additionally, the low temperature of the organ compared to in vivo applications may allow for the application of much higher voltages to attain appropriate electric fields for decellularizing thicker structures without inducing thermal damage. This is important since the thickness of a human liver can exceed 10 cm in some regions.
When planning to decellularize tissues and organs undergoing active perfusion, the treatment region of decellularized tissue may be predicted through numerical modeling. From the lesion sizes and numerical model used here, when decellularizing an entire organ for a transplantable scaffold, the protocol should expose all of the tissue to an electric field of 423 +/-147 V/cm. This will ensure complete cell death, allowing comprehensive reseeding of the scaffold with the desired cells, thus minimizing the effects of recipient rejection. The threshold found here is slightly lower than the approximate 500-700 V/cm values reported in previous investigations [34,35]. This may be a result of the unique pulse parameters used (e.g., pulse number) or an inherent increased sensitivity of the cells to the pulses when under perfusion. The variability in the electric field threshold may result from the multiple inhomogeneous characteristics of the tissue anatomy and structure, such as the vascular system and tissue thickness, leading to lesions that were not perfectly circular.
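To illustrate how such a field threshold can be turned into a predicted treatment region, the sketch below evaluates the analytical field of two parallel needle electrodes, treated as line charges in a homogeneous medium, and counts the cross-sectional area above 423 V/cm. The voltage, spacing and needle radius are assumed values chosen only for illustration; the study's actual numerical model, which accounts for tissue heterogeneity, is not reproduced here.

```python
import numpy as np

# Illustrative geometry and voltage (assumed values, not those of the study's model)
U = 1000.0    # applied voltage, V
d = 1.0       # centre-to-centre needle spacing, cm
a = 0.05      # needle radius, cm
E_th = 423.0  # lethal N-TIRE threshold reported in this work, V/cm

# Grid around the electrode pair (cm)
x, y = np.meshgrid(np.linspace(-1.5, 1.5, 601), np.linspace(-1.5, 1.5, 601))
r1 = np.maximum(np.hypot(x + d / 2, y), a)  # distances to each needle, clamped at the
r2 = np.maximum(np.hypot(x - d / 2, y), a)  # needle radius to avoid singularities

# Two-line-charge potential for parallel needles in a homogeneous medium
phi = U / (2.0 * np.log((d - a) / a)) * np.log(r2 / r1)

# Field magnitude and predicted lesion cross-section (area above threshold)
gy, gx = np.gradient(phi, y[:, 0], x[0, :])
E = np.hypot(gx, gy)
cell_area = (x[0, 1] - x[0, 0]) ** 2
print(f"predicted lesion cross-section ~ {np.count_nonzero(E >= E_th) * cell_area:.2f} cm^2")
```

Varying the voltage, spacing, or threshold in such a model is one way to explore how the lesion size scales before committing to a full numerical treatment plan.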
The continuous active machine perfusion methods utilized here in the decellularization process may also be advantageous for recellularization. Once the decellularization process is complete, it should be possible to reseed the scaffold without risking damage attendant with removing the newly-created scaffold from the perfusion system. Since the arterial and venous supplies are individually addressable, multiple cell types can be delivered simultaneously to different regions of the organ. Similarly, retrograde perfusion through the biliary system may be the ideal pathway in which to deliver hepatocytes for the reseeding process.
Lesions seen microscopically are clearly indicative of a mechanism and morphology for cellular stripping using electroporation. It is very interesting that even at 24 hours, when using the N-TIRE parameters described here, there is only a modest loss of acinar architecture. More stringent conditions of energy delivery could likely alter this, but this might induce damage to important connective tissue and vascular structures. Addition of adjuvant cytotoxic agents, enzymes, and detergents in the perfusion fluid also might modulate the severity and temporal nature of cell stripping. Logically, it is much better to build on mild conditions, preserving important architecture for tissue engineering purposes, than to rapidly obliterate cells and stroma. The ability to manage the period of perfusion and conditions of perfusion with the cardioemulation system has clear advantages for this gradual, evolutionary approach to decellularization and eventual recellularization of liver.
Conclusions
This study investigated the ability to develop decellularized tissue scaffolds using N-TIRE on organs undergoing active perfusion. Porcine livers were harvested and placed under active mechanical perfusion while N-TIRE electrical pulses were applied using plate and needle electrodes. Livers were removed from the perfusion system, the resultant lesions and control regions were examined histologically, and a maximum density score reduction of 58.5 percent was observed. Through numerical modeling of the electric field distribution from the pulse applications, it was found that an N-TIRE threshold of 423 ± 147 V/cm may be used to predict the affected area. The continuous active machine perfusion method utilized during the decellularization process in this study provides the necessary platform for scaffold recellularization, a vital aspect required for practical organ transplantation techniques. | 2016-05-04T20:20:58.661Z | 2010-12-10T00:00:00.000 | {
"year": 2010,
"sha1": "01706267090a216143df4d4077b04ff18f8f0a58",
"oa_license": "CCBY",
"oa_url": "https://biomedical-engineering-online.biomedcentral.com/track/pdf/10.1186/1475-925X-9-83",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "01706267090a216143df4d4077b04ff18f8f0a58",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244464163 | pes2o/s2orc | v3-fos-license | Survey of nebulizer therapy for nasal inflammatory diseases in Japan before and during the COVID-19 pandemic
Objective: Nebulizer therapy is an effective and safe topical treatment for rhinosinusitis and is frequently used by otolaryngologists in Japan. However, treatment methods used vary among regions and according to doctors’ preferences. In this study, we aimed to investigate the use of nebulizer therapy for rhinosinusitis. Administration of nebulizer therapy has been affected by the coronavirus disease 2019 (COVID-19) pandemic. Thus, we also investigated the difference in the prevalence of nebulizer use before and during the pandemic. Methods: Between February and September 2016 and in January 2021, we administered questionnaire surveys on nebulizer treatment for rhinosinusitis to otorhinolaryngologists, who were members of the Oto-Rhino-Laryngological Society of Japan, in Aomori, Saitama, Mie, Fukui, Shiga, Okayama, and Kagoshima prefectures. Results: More than 90% of the otorhinolaryngologists performed nebulizer treatment for rhinosinusitis in 2016. In April 2020 (the first wave of the COVID-19 pandemic), the use rate decreased to 20%, but in January 2021, the use rate increased to 60%. Jet nebulizers were the most frequently used type. One-third of the otolaryngologists enlarged the natural opening of the paranasal sinuses in more than half of their patients by using vasoconstrictors. Cefmenoxime and betamethasone were the most commonly used antibiotics and steroids, respectively. Conclusion: Because it is important to perform nasal pretreatment and strict disinfection of nebulizer equipment, it is clear that education of otorhinolaryngologists as well as paramedical personnel is required to ensure safe and effective use of nebulizer therapy in Japan.
Introduction
Inflammatory nasal and paranasal sinus diseases, such as rhinosinusitis and allergic rhinitis, are common in daily clinical practice. Clinical treatment guidelines for acute rhinosinusitis have been published in Japan [1]; these guidelines recommend enlarging the natural opening of the paranasal sinuses before using a nebulizer for mild cases. Nebulizer therapy has become a well-known treatment option throughout Japan and has been covered by public health insurance since 1958. Nebulizer therapy is an effective treatment based on a drug delivery system to the nasal and paranasal cavities. The therapy effectively increases the local drug concentration by promptly and uniformly delivering drugs to a targeted local site. The therapy is safe, with low systemic absorption and few adverse reactions.
However, the method of administration varies among otorhinolaryngologists. Appropriate choice of devices, medicine and pretreatment are important considerations for nebulizer therapy in the treatment of rhinosinusitis.
Since there has been no uniformity in the methods of nebulizer therapy to date, otorhinolaryngologists have developed their own methods of use. Therefore, we investigated the current status of nebulizer therapy for nasal inflammatory diseases in Japan. As nebulizer therapy has also been affected by the coronavirus disease (COVID-19) pandemic, we also investigated the difference in the frequency of nebulizer use before and during the pandemic.
Materials and methods
We administered questionnaires on the use of nebulizer therapy for rhinosinusitis to members of the Oto-Rhino-Laryngological Society of Japan in Aomori, Saitama, Mie, Fukui, Shiga, Okayama, and Kagoshima prefectures from February 2016 to September 2016. The included questions are listed in Table 1 .
In January 2021, we administered a follow-up questionnaire in the same prefectures of Japan. The first question was "Did you use nebulizer treatment in April 2020?", and the second was "Did you use nebulizer treatment in January 2021?".
Participants
This study involved 414 otorhinolaryngologists from 94 hospitals and 320 private clinics. The final response rate to the questionnaires was 52.6%. The response rates for each prefecture were as follows: 29 (44.6%) in Aomori, 79(31.1%) in
Seventeen otorhinolaryngologists did not use nebulizer therapy. Their reasons for not using nebulizer therapy were as follows. Five doctors believed that their clinics were understaffed to provide the treatment. Three doctors wanted to put more effort in performing surgery than in providing nebulizer therapy. Three doctors were not interested in providing nebulizer treatment, and three doctors believed that nebulizer therapy was ineffective for nasal inflammation. Three doctors also believed that nebulizer treatment was associated with a high risk of nasal infection.
In April 2020, nebulizer therapy was used by 20.9% of the otorhinolaryngologists. The highest frequency of use was found in Kagoshima (29.0%), and the lowest was in Okayama (16.0%). In January 2021, the treatment was used by 60.6% of the otorhinolaryngologists, and the lowest prevalence of use was found in Shiga (48.5%) ( Table 2 ).
Types of nebulizers used
Jet and ultrasonic nebulizers were used in the ear, nose, and throat (ENT) clinics (ENT, Otorhinolaryngology) of the seven prefectures. Seventy percent of otorhinolaryngologists used jet nebulizers, which suggested that jet nebulizers were standard for ENT clinics. The ratios of the nebulizer device in each prefecture are listed in Table 3 . The usage of ultrasonic nebulizers was higher than that of jet nebulizers only in Kagoshima (52.8%).
The devices were also classified by size into three types: installed, handy, and portable. The first is the drug-installed type, and the latter two (handy and portable) are drug-separate types. The drug-installed type was used more frequently than the drug-separate types.
Indications for nebulizer therapy
All otorhinolaryngologists performed nebulizer therapy for rhinosinusitis. Nebulizer therapy was also used for allergic rhinitis by 88.9% of the otorhinolaryngologists in the seven prefectures.
Ancillary equipment for nebulizer therapy
Mask and nosepiece types are ancillary equipment required for nebulizers. The nosepiece type was predominantly used over the mask type (82.7% vs. 17.3%). Nosepieces were made of plastic (59.6%), glass (38.1%), and silicone (2.3%) ( Table 4 ). Plastic nosepieces were predominantly used in all prefectures, except in Mie, where glass nosepieces were the most frequently used (64.1%).
Frequency of opening the natural ostium into the paranasal sinuses before nebulizer therapy
Otorhinolaryngologists enlarge the natural opening of the maxillary and ethmoid sinuses before nebulizing a patient with sinusitis. However, this pretreatment was performed differently among otorhinolaryngologists. The otorhinolaryngologists were divided into four groups (groups A-D). Among those who received nebulizer treatment, the doctors enlarged the natural opening in < 10% of the patients in group A, 11%-30% of the patients in group B, 31%-50% of the patients in group C, and > 51% of the patients in group D. The percentage of otorhinolaryngologists in group A was 37.4%; group B, 15.1%; group C, 15.1%; and group D, 32.7% ( Fig. 1 ). Saitama had the highest number of physicians who actively performed the opening procedures for the maxillary and ethmoid sinuses (group D: 57.7%). On the contrary, those in Fukui (group A: 58.8%) and Okayama (group A: 50.0%) were the most reluctant to perform opening procedures.
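For readers reproducing this grouping, the binning amounts to a simple threshold classification, sketched below in Python; the handling of responses falling exactly on the 10%, 30% and 50% boundaries is an assumption, since the survey categories leave it implicit.

```python
from collections import Counter


def pretreatment_group(percent_of_patients):
    """Assign a respondent to group A-D by the share of their nebulizer patients
    whose natural ostium was enlarged beforehand (boundary handling assumed)."""
    if percent_of_patients <= 10:
        return "A"
    if percent_of_patients <= 30:
        return "B"
    if percent_of_patients <= 50:
        return "C"
    return "D"


responses = [5, 25, 45, 80, 60, 8]  # hypothetical survey answers, in percent
print(Counter(pretreatment_group(r) for r in responses))
```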
Medications for nebulizer therapy
Antibacterial agents were added to the nebulizer solution in 89.8% of all the ENT clinics. In Kagoshima and Saitama, 19.2% and 13.9% of the ENT clinics, respectively, did not use antibiotics ( Fig. 2 A). Cefmenoxime was the most commonly added antibacterial agent to a nebulizer solution in all prefectures. It was used in 79.2% of the clinics in Okayama and in 71.2% of the clinics in Kagoshima (highest rates among the prefectures). By contrast, the use rate in Saitama was 36.7% and that in Aomori was 42.4% (lowest rates among the prefectures). Panimycin was the second most commonly used agent (13.7%) in all prefectures. It was used in 24.2% of the clinics in Aomori, 22.2% in Fukui, 18.4% in Mie, 16.5% in Saitama, 13.6% in Shiga, 3.9% in Okayama, and 0% in Kagoshima. These findings indicate that the type of antibiotics used differed among the prefectures.
Corticosteroids were also added to the nebulizer solution in 79.8% of the ENT clinics. Betamethasone, which was used in 52.3% of the ENT clinics, had the highest frequency of use, followed by dexamethasone (19.3%) and prednisolone (3.6%). Notably, one-half of the ENT clinics in Aomori did not use corticosteroids for nebulizer treatment ( Fig. 2 B).
Safety management: washing, sterilization, and disinfection of devices
Nosepieces and masks were disinfected after each patient use in 88.2% of the ENT clinics. Some ENT clinics used disposable nosepieces and masks. Disinfectants (67.5%), autoclaves (16.8%), boiling sterilization (4.2%), and gas sterilization (2.5%) were used to disinfect the nosepieces and masks.
Disinfection of hoses was performed in 68.9% of the ENT clinics daily (including every patient and every half day), whereas it was not performed in 19.2% of the ENT clinics ( Fig. 3 A). The main units were not washed or disinfected in 50.4% of the institutions ( Fig. 3 B). They were washed and disinfected after each patient in 5.5% of the institutions, every half-day in 1.9% of the institutions, and daily in 25.5% of the institutions. More than 50% of the institutions in Fukui (66.7%), Okayama (54.8%), Mie (53.4%), and Aomori (52.6%) did not wash or disinfect the main units.
Applying disinfectants (66.6%) was reported to be the most common practice for disinfection of the inner and outer hoses, followed by washing alone (18.8%) and autoclaving (7.4%) ( Fig. 4 ).
Discussion
In this study, we confirmed the use of nebulizer therapy for nasal inflammatory diseases, such as sinusitis and allergic rhinitis, in Japan before the COVID-19 pandemic. More than 90% of the ENT institutions used nebulizer therapy; those not using nebulizer therapy were focused on palliative medicine and surgery. Nebulizers allow an effective and easy drug delivery to inflamed sinuses [2] and efficacy has been reported in European countries [3] . In recent years, efficacy as a postoperative treatment for endoscopic sinus surgery has been reported [4] . Also, nebulizer therapy has been considered to be a safe treatment without adverse events. However, therapy methods vary with regard to the drugs and equipment used. Nebulizer therapy is often administered as a customized treatment by each ENT clinic in Japan.
The three main types of nebulizers are jet nebulizers, which use compressed air; ultrasonic nebulizers, which use high-frequency piezoelectric quartz vibrations; and mesh nebulizers, which use the vibration of a microperforated mesh [5]. According to our study, jet nebulizers are used more frequently than ultrasonic nebulizers in Japan. However, the preferred type of nebulizer differed among regions, and ultrasonic nebulizers were used slightly more frequently in Kagoshima Prefecture. Jet and ultrasonic nebulizers differ in their aerosol generation, but their differences are not clearly elucidated. Previous studies reported that ultrasonic nebulizers can deliver more drug to the paranasal sinuses than jet nebulizers because the former can generate smaller particles than the latter. However, recent developments in nebulizer devices have made this difference smaller [6, 7].
In nebulizer therapy, the drugs should be delivered without waste to the target site and absorbed to exert their positive effects. To achieve this, patient's nasal tissues need to be prepared to absorb the drug. The presence of viscous nasal discharge prevents nebulized drugs from being absorbed at the target site, and obstruction of the middle nasal meatus reduces the amount of drug that can reach the orifice of the paranasal sinuses or the middle nasal meatus. A previous study reported that a natural ostium of the maxillary sinus with at least 3 mm in diameter is required to allow the drugs to reach the maxillary sinus [2] . Therefore, nasal pretreatment, cleaning and opening of the natural ostium into the middle nasal meatus are important before initiating nebulizer therapy [ 8 , 9 ]. Therefore, we obtained information from the medical institutions about the percentage of patients who underwent opening of the natural ostium into the paranasal sinuses before undergoing nebulizer treatment. The results showed that 31.2% of the institutions performed this pretreatment in more than half of their patients. In some prefectures, pretreatment was carried out in 10% or fewer patients. Because it is recommended to enlarge the natural opening of the paranasal sinuses before nebulizer therapy, it is necessary to raise otorhinolaryngologists' awareness of this requirement for pretreatment.Various types of antibiotics have been used for nebulizer therapy. In Japan, cefmenoxime was approved for nebulizer therapy in 1996 and its use has increased in recent years. Betamethasone is the preferred steroid drug because of its stability over a long period of time.
The COVID-19 pandemic has affected the use of nebulizer therapy in Japan. Aerosol-generating procedures include nebulizer treatment, as well as endotracheal intubation, bronchoscopy, and open suctioning. Some of these procedures, such as intubation, open suctioning, tracheotomy, manual ventilation, and bronchoscopy, can significantly increase the risk of producing bioaerosols that may contain pathogens. Other procedures, such as nebulizer treatment, high-flow nasal oxygen, and use of medical aerosols, potentially disperse bioaerosols from the patient to the surrounding area, but no evidence exists on their additional potential to generate contaminated aerosols [10].
The viral pandemic of COVID-19 has raised concerns on the use of nebulization. Many guidances and statements have been reported for physicians on the role and use of nebulization in the current pandemic, based on current evidence and understanding [11] . Many of the reports are for lower respiratory tract disease.
There are many opinions about the use of nebulizer devices, but little has been reported on their use for upper respiratory tract conditions such as nasal inflammatory disease. In our study, the use of nebulizer therapy was found to have decreased during the COVID-19 pandemic in Japan; the prevalence of use fell to 20.9% during the first wave of the pandemic (April 2020). We believe that many medical institutions are resuming nebulizer therapy for nasal and sinus disease while taking measures to prevent COVID-19 infection. Since coronavirus is transmitted mainly by droplet and contact routes, measures such as social distancing, ventilation, disinfection of equipment, and other preventive steps are considered important [12]. With regard to equipment management, nebulizers, which can become heavily contaminated with microorganisms, are potential sources of infection. Therefore, they should be disinfected following proper procedures. Most medical institutions surveyed were private otorhinolaryngology clinics, and disinfectants were used primarily for disinfecting equipment. Nosepieces and masks, which are directly attached to patients, were thoroughly disinfected. However, inner and outer hoses and main units were insufficiently washed, disinfected, and sterilized. Most bacteria detected on nebulizer equipment are non-glucose-fermenting gram-negative bacilli, which form biofilms and are resistant to disinfectants [13]. Therefore, otorhinolaryngologists as well as paramedical personnel should be educated to appropriately disinfect nebulizer equipment.
A limitation of this study is the low response rate, so the findings may not reflect nationwide practice. However, the surveyed prefectures were selected to reflect differences in the incidence of COVID-19 across Japan. A larger, nationwide survey remains a topic for future study.
Conclusion
Our investigation of the status of nebulizer therapy, which is commonly used for rhinosinusitis in Japan, before and during the COVID-19 pandemic, revealed several areas of concern. Because it is important to perform nasal pretreatment and strict disinfection of nebulizer equipment, it is clear that education of otorhinolaryngologists as well as paramedical personnel is required to ensure safe and effective use of nebulizer therapy in Japan. | 2021-11-22T14:09:09.888Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "57faadd65246dc82b474cae0e440bf1571dde095",
"oa_license": null,
"oa_url": "http://www.aurisnasuslarynx.com/article/S0385814621002698/pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "4af690183dbfa290ec7eb4279e0ece0b77972747",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11935596 | pes2o/s2orc | v3-fos-license | Formation of gas-phase carbonyls from heterogeneous oxidation of polyunsaturated fatty acids at the air – water interface and of the sea surface microlayer
Motivated by the potential for reactive heterogeneous chemistry occurring at the ocean surface, gas-phase products were observed when a reactive sea surface microlayer (SML) component, i.e. the polyunsaturated fatty acid (PUFA) linoleic acid (LA), was exposed to gas-phase ozone at the air–seawater interface. Similar oxidation experiments were conducted with SML samples collected from two different oceanic locations, in the eastern equatorial Pacific Ocean and from the west coast of Canada. Online proton-transfer-reaction mass spectrometry (PTR-MS) and University of Colorado light-emitting diode cavity-enhanced differential optical absorption spectroscopy (LED-CE-DOAS) were used to detect oxygenated gas-phase products from the ozonolysis reactions. The LA studies indicate that oxidation of a PUFA monolayer on seawater gives rise to prompt and efficient formation of gas-phase aldehydes. The products are formed via the decomposition of primary ozonides which form upon the initial reaction of ozone with the carbon–carbon double bonds in the PUFA molecules. In addition, two highly reactive dicarbonyls, malondialdehyde (MDA) and glyoxal, were also generated, likely as secondary products. Specific yields relative to reactant loss were 78 %, 29 %, 4 % and < 1 % for n-hexanal, 3-nonenal, MDA and glyoxal, respectively, where the yields for MDA and glyoxal are likely lower limits. Heterogeneous oxidation of SML samples confirms for the first time that similar carbonyl products are formed via ozonolysis of environmental samples.
Introduction
The ocean covers more than 70 % of the Earth's surface, and the sea surface microlayer (SML) is an important boundary which plays a crucial role in chemical exchange between the atmosphere and ocean (Donaldson and George, 2012). Recent field observations indicate missing sources for oxygenated hydrocarbons from the oceans in atmospheric models (Myriokefalitakis et al., 2008; Sinreich et al., 2010). It is known that the SML is a complex organic and inorganic mixture (Pogorzelski and Kogut, 2003; Kozarac et al., 2005). The organic substances in the SML, including proteins, polysaccharides, humic-type materials and lipids, are likely produced from marine biota (Wilson and Collier, 1972; Gašparovic et al., 1998). As a main component of lipids, fatty acids (FA), including polyunsaturated FA (PUFA), have been detected in considerable amounts in sea surface water with concentrations of 3-200 µg L⁻¹ (Marty et al., 1979; Derieux et al., 1998; André et al., 2004; Parrish et al., 2005; Blaženka et al., 2007). PUFA contribute as much as ∼43 % of the total FA measured, for example, in sub-Arctic Norwegian fjords (Blaženka et al., 2007). In addition, PUFA have also been detected in marine aerosols (Kawamura and Gagosian, 1987; Mochida et al., 2002; Fang et al., 2002).
As surfactants with carbon-carbon double bonds, PUFA reside at the air/water interface and provide information on the degradation state of the organic matter in the SML and aerosols (Blanchard, 1964; Barger and Garrett, 1970, 1976; Donaldson and Vaida, 2006). Despite their environmental prevalence, the atmospheric importance of PUFA remains poorly characterized. In particular, as a significant component of the SML, are these species reactive with atmospheric oxidants when present on the water surface? As a result of oxidation, what species are formed, and are the products readily released to the atmosphere?
To date, the focus of laboratory studies in this area has been on the kinetics of heterogeneous oxidation of pure unsaturated FA compounds, especially on oleic acid (OA) (Zahardis and Petrucci, 2007).Besides studies on pure FA liquids and aerosols which indicate reactive uptake coefficients on the order of 10 −3 , notable studies include those of McNeil et al. (2007), who reported an uptake coefficient of roughly 10 −5 for ozone on oleate-aqueous salt aerosols and González-Lebrada et al. (2007), who measured 10 −6 for monolayer OA on aqueous droplets.These values are comparable with the initial uptake coefficients for ozone on terminal alkenes on self-assembled monolayer (Dubowski et al., 2004).Based on additional measurements involving the OH and NO 3 oxidants on oleate-aqueous salt particles by Mc-Neil et al. (2007), the lifetime of unsaturated organics at the air-water interface can be estimated to be ∼10 min with respect to reaction with atmospheric levels of ozone (50 ppb), whereas the lifetimes with respect to OH reaction would be significantly longer (McNeil et al., 2007;González-Lebrada et al., 2007).Kinetics studies with PUFA such as linoleic acid (LA) and linolenic acid yield comparable results to those with OA (Moise and Rudich, 2002;Thornberry and Abbatt, 2004;Zhao et al., 2011).
Moreover, gas-phase aldehyde formation has been widely reported from studies with pure substrates (Zahardis and Petrucci, 2007, and references therein). Among them, Moise and Rudich (2002) first quantified a yield of gas-phase nonanal of 28 % from O3 + OA at room temperature, and ∼50 % yields were reported by Thornberry and Abbatt (2004) and Vesna et al. (2009). Thornberry and Abbatt (2004) also reported n-hexanal and nonenal yields of ∼25 % for the ozone reaction with an LA thin film, suggesting that the two LA carbon-carbon double bonds have the same probability of being attacked by ozone.
In addition to gas-phase aldehydes, several lower-volatility products, namely azelaic acid, nonanoic acid and 9-oxononanoic acid, have been quantified from ozonolysis of pure OA, with 9-oxononanoic acid the most prevalent with yields of 14-35 % (Katrib et al., 2004; Hung et al., 2005; Ziemann, 2005), while azelaic and nonanoic acid yields are less than 10 % (Katrib et al., 2004; Ziemann, 2005). It is notable that these yields are different from King et al. (2009), who reported 87 % for nonanoic acid from heterogeneous ozonolysis of a monolayer of deuterated OA on water. This latter result implies that the heterogeneous ozonolysis mechanism may be different for an FA monolayer at the air-water interface than for a pure substance.
The goals of this work are threefold. First, given the dominance of product studies performed with pure substances, there is the need to determine the gas-phase products that form from heterogeneous oxidation of PUFA when present as a monolayer on an aqueous substrate, to better match conditions at the air-seawater interface in the marine boundary layer (MBL). Of particular interest is whether soluble species are promptly released to the gas phase or whether they dissolve instead in the underlying aqueous medium. Measurements are performed with LA monolayers sitting on seawater in a flow tube coupled to online proton-transfer-reaction mass spectrometers (PTR-MS) and a light-emitting diode cavity-enhanced differential optical absorption spectrometer (LED-CE-DOAS). Second, we investigate whether or not highly reactive dicarbonyls can be formed from this heterogeneous reaction. Finally, we investigate whether similar oxygenated volatile organic compounds (VOCs), especially carbonyls, form when natural SML samples are exposed to ozone. To the best of our knowledge, this is the first study of the heterogeneous oxidation of natural SML materials.
Flow tube apparatus and detection schemes
Heterogeneous ozonolysis of LA on seawater was performed at room temperature (296 ± 3 K) in a flow tube apparatus shown in Fig. 1.A LA monolayer was prepared by adding either 2 µL of pure LA (≥ 99 %, Sigma-Aldrich) (Type 1 experiments) or 2-4 µL of a LA-dichloromethane (DCM, HPLC grade, ≥ 99.9 %) solution (1.6 × 10 −2 M) (Type 2 experiments) onto 10 mL commercial seawater (Sigma-Aldrich) which had been added into a glass boat (2 cm wide and 20 cm long) prior to addition of LA.The glass boat was placed inside a 2.2 cm i.d., 50 cm long glass flow tube with two inlets.In Type 1 experiments, an oily drop of LA can be visually seen after the pure LA was added on the seawater, while it was not observed in the Type 2 experiments, where the DCM was evaporated by passing clean air through the flow tube for 15 min after LA-DCM solution was added onto the seawater.Ozone was introduced into the flow tube from one inlet and the flow tube outlet was connected to the analytical instruments (Fig. 1a).The relative humidity at the exit of the flow tube was ∼40 %.
Ozone was generated by passing 1000 sccm of synthetic air through an ozone generator that was composed of a quartz cell and a Pen-Ray lamp with a metal cover, which regulated the ozone formation rate.The ozone mixing ratio was measured by a UV photometric O 3 analyzer (Thermo Model 49i) and by the LED-CE-DOAS.A unit resolution PTR-MS (Ionicon Analytik GmbH) and an LED-CE-DOAS instrument were placed downstream of the flow tube to detect the gas-phase products, with 100 sccm going to the PTR-MS and 500 sccm to the LED-CE-DOAS.The residence time for ozone as well as gas-phase products in the flow tube is ∼10 s.
Proton-transfer-reaction mass spectrometry (PTR-MS) has been described in detail by de Gouw and Warneke (2007). Briefly, the instrument utilizes a soft chemical ionization technique, transferring H+ from the reagent ion, H3O+, to gas-phase species with a higher proton affinity than water. Here, the PTR-MS was run either in scan mode, in which the PTR-MS recorded the multiplier signal over the m/z range of 21 to 150, or in selected ion mode (SIM), in which only the signals from the masses of interest were recorded.
For the SML and some LA studies, a PTR-TOF-MS with a time-of-flight mass spectrometer was used along with a switchable reagent ion source (SRI, including H 3 O + and NO + ) (Jordan et al., 2009).The SRI enables identification of VOC isomers indistinguishable with H 3 O + ionization, e.g.aldehyde and ketone isomers (Dunne et al., 2012).In this work, NO + was used to differentiate aldehyde and ketone isomers, because NO + reacts with aldehydes to produce mainly dehydrogenated cations (M-H) + , whereas its reaction with ketones yields NO + cluster ions (M + NO) + (Jordan et al., 2009).
The University of Colorado LED-CE-DOAS instrument (Thalman and Volkamer, 2010) employs a blue LED light source (420-490 nm) coupled to a high-finesse optical cavity consisting of two highly reflective mirrors (R = 0.99997) placed about 92 cm apart, leading to a wavelength-dependent sampling path length of ∼18 km. The mirrors are purged with dry nitrogen gas, and the mirror reflectivity was determined by flowing helium and nitrogen gas; the mirror alignment is further monitored online by observing the slant column density of oxygen dimers (O4). The light exiting the cavity is projected onto a quartz optical fiber coupled to an Ocean Optics QE65000 spectrometer equipped with a CCD detector. The spectra recorded are stored on a computer and analyzed by means of DOAS least-squares fitting. The measured concentrations are calibrated from knowledge of reference spectra of glyoxal (Volkamer et al., 2005a), nitrogen dioxide (Vandaele et al., 2002), ozone (Bogumil, 2003) and oxygen dimers (Hermans, 1999). The concentration of ozone was retrieved using an absolute intensity fitting procedure (Washenfelder et al., 2008), as the dominant absorption of ozone at these wavelengths is mostly broadband. Detection limits for glyoxal, NO2, O4, and ozone were 20 pptv, 25 pptv, 0.01 % mixing ratio, and 30 ppbv, respectively (Thalman and Volkamer, 2010).
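The quoted path length follows from the standard cavity-enhancement relation L_eff ≈ d/(1 − R), where d is the mirror separation and R the wavelength-dependent mirror reflectivity. The short calculation below is purely illustrative: a band-averaged reflectivity of roughly 0.99995 reproduces the ∼18 km figure, whereas the nominal peak reflectivity of 0.99997 would give about 31 km.

```python
d = 0.92                      # mirror separation, m
for R in (0.99995, 0.99997):  # assumed band-averaged vs. nominal peak reflectivity
    L_eff = d / (1.0 - R)     # standard cavity-enhanced effective path length
    print(f"R = {R}: L_eff ~ {L_eff / 1000:.0f} km")
```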
To quantify the gas-phase products, the PTR-MS was calibrated with four aldehydes, namely propanal, 3-hexenal, n-hexanal and 2-nonenal, the last of which was used as a surrogate for 3-nonenal, which was commercially unavailable. The calibration was performed by introducing a known amount of either pure liquid sample or n-heptane solutions into a 1 m³ Teflon chamber. n-heptane was used as a solvent to dissolve the aldehydes because it had minimal interference with the selected m/z signals for the above-mentioned aldehydes. In addition, the n-hexanal PTR-MS calibration from the chamber was verified by introducing a known pressure of n-hexanal into an evacuated 3 L glass reservoir, which was then diluted with nitrogen. The n-hexanal PTR-MS signal was calibrated using flow tube mixing ratios calculated from the pressure drop with time of the glass reservoir, as the flow was metered out through a needle valve. The PTR-TOF-MS was calibrated by diluting ppm-level gas standards from a gas cylinder containing acetaldehyde, acrolein, acetone and 2-butanone (Ionimed Analytik GmbH).
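The bulb calibration amounts to converting the measured rate of pressure decay into a molar flow of n-hexanal and dividing by the total molar flow through the flow tube. The sketch below shows that arithmetic; the bulb mole fraction, the pressure-decay rate and the ideal-gas conversion factors are illustrative assumptions rather than values from the experiment.

```python
R = 8.314        # J mol-1 K-1
T = 296.0        # K
V_bulb = 3.0e-3  # m3, the 3 L glass reservoir
x_hex = 0.001    # mole fraction of n-hexanal in the bulb after dilution with N2 (assumed)
dP_dt = 0.5      # Pa s-1, measured pressure-decay rate of the bulb (assumed)

n_dot_hex = x_hex * dP_dt * V_bulb / (R * T)  # mol s-1 of n-hexanal delivered

flow_total_sccm = 1000.0                        # carrier flow through the flow tube
n_dot_total = flow_total_sccm / 60.0 / 22414.0  # mol s-1 (22414 cm3 mol-1 at STP)

mixing_ratio_ppb = n_dot_hex / n_dot_total * 1e9
print(f"n-hexanal in the flow tube: {mixing_ratio_ppb:.0f} ppb")
```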
Using the apparatus in Fig. 1 and the PTR-TOF-MS with H3O+ as the reagent ion, gas-phase products from the heterogeneous reaction of ozone (generated by passing 500 sccm of synthetic air through the ozone generator) with SML samples were investigated. The SML samples were collected in two locations, one from Patricia Bay, Canada (SML-CA), and the other from the eastern equatorial Pacific Ocean (SMLE-qPOS). Detailed information about the SML sample collection is given in the Supplement. After the SML samples were collected, they were transported in dry ice and kept frozen in the laboratory at −18 °C. The SML samples were thawed at room temperature, and 10 mL was added into the glass boat inside the flow tube. Control experiments were conducted in a manner analogous to the oxidation experiments without the SML sample present.
Chamber experiments
As will be shown later, malondialdehyde (MDA) and glyoxal were observed as products from ozonolysis of LA.The formation of MDA and glyoxal is mostly attributed to a secondgeneration product from reaction of 3-nonenal with ozone.To prove this hypothesis, the gas-phase reaction of ozone with 3-hexenal, used as a surrogate for commercially unavailable 3-nonenal, was conducted in the 1 m 3 Teflon chamber.
The chamber was made of PFA (perfluoroalkoxy) Teflon film with a Teflon-tape-wrapped metal frame inside to support the bag. The chamber was flushed overnight with purified air before starting the experiment. Approximately 200 ppb cis-3-hexenal (50 % in triacetin, Sigma-Aldrich) and ∼200 ppm cyclohexane (HPLC grade, ≥ 99.9 %) were added into the chamber before ∼300 ppb ozone was introduced. Cyclohexane was present in excess to scavenge more than 95 % of the OH radicals produced in the reaction. The reactants and products were detected by the PTR-MS and LED-CE-DOAS.
LA monolayers on seawater
In the Type 1 experiments, it is expected that the drop of LA will spread to produce a monolayer over the seawater surface.To test this, a control experiment was conducted by adding 2 µL pure LA into a clean glass boat without seawater and passing ozone through the flow tube at the same levels as used in the oxidation experiments.
The product signals recorded by the PTR-MS for the control were more than an order of magnitude lower than those for the Type 1 experiments, confirming that the LA does spread across the seawater surface but does not spread over a dry glass boat.
Indeed, Rouviére and Ammann (2010) investigated the monolayer properties of fatty acids coated on deliquesced KI particles and reported that, if more than a monolayer is deposited, the fatty acid formed a monolayer and the residual remained as an excess droplet in contact with the aqueous solution.It is expected that this is the case for the Type 1 experiments in this work.
Identification of gas-phase aldehydes from ozonolysis of monolayer LA
Figure 2 shows a typical mass spectrum for the gas-phase products from Type 1 experiments recorded by the PTR-MS in scan mode (a) and the evolution of selected ions recorded by PTR-TOF-MS (b). Based on previous studies (Moise and Rudich, 2002; Thornberry and Abbatt, 2004), m/z 141 and 123 in Fig. 2a and m/z 141.13 (C9H17O+) and 123.12 (C9H15+) in Fig. 2b were identified as protonated 3-nonenal (M+1) and its dehydrated ion (M-18+1), respectively. m/z 101 and 83 in Fig. 2a and m/z 101.10 (C6H13O+) and 83.08 (C6H11+) in Fig. 2b are attributed to protonated n-hexanal (M+1) and its dehydrated ion (M-18+1), respectively. The signals at m/z 123 and 83 from the PTR-MS were used to quantify 3-nonenal and n-hexanal, respectively. The other m/z in Fig. 2 were from further decomposition of protonated or dehydrated 3-nonenal and n-hexanal, with the exception of the signal at m/z 73 in Fig. 2a.
Several aldehyde and ketone isomers, e.g. butanal, butanone, methyl glyoxal or MDA, have the same nominal unit-mass molecular weight and may produce protonated molecular ions at m/z 73 in the PTR-MS. Therefore, absolute identification of this product with PTR-MS alone is impossible. However, Fig. 3a gives the PTR-TOF-MS mass spectrum of this product with H3O+ as the reagent ion, showing the exact mass to be 73.02 (M+1) and suggesting that its molecular formula is C3H4O2 (note the different scales in Fig. 3a with ozone off and on). Therefore, the monocarbonyls butanal and butanone (C4H8O) can be excluded from the candidate products. In Fig. 3b the reagent ion is switched from H3O+ to NO+ just after run 30. The m/z 73.02 (C3H5O2+) signal was highest with H3O+ as the reagent ion, with the signal dropping to background level with NO+. Concurrently, the signal at m/z 71.01 (C3H3O2+) rises, indicating H-atom abstraction from the molecular ion C3H4O2, while the signal at m/z 102.02 (C3H4O2·NO+) remains unchanged. This is consistent with the accepted mechanism by which NO+ reagent ions react with aldehydes. Note that methyl glyoxal can be ruled out as a candidate because a sample of this gas behaves differently with the NO+ reagent ion (Fig. S1) than the behavior displayed in the experiment; i.e. the m/z 102.02 (C3H4O2·NO+) signal rises substantially when the reagent ion is switched from H3O+ to NO+. Hence, in combination with the mechanistic study to be described later, this product is attributed to MDA.
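The exact-mass assignments above can be checked against monoisotopic atomic masses; the short script below, written purely as an illustration and not part of the original analysis, reproduces the quoted ion masses to within about 0.01 u.

```python
# Monoisotopic atomic masses in u (electron mass neglected)
m = {"C": 12.0, "H": 1.00783, "O": 15.99491, "N": 14.00307}


def ion_mass(C=0, H=0, O=0, N=0):
    return C * m["C"] + H * m["H"] + O * m["O"] + N * m["N"]


ions = {
    "C9H17O+ (protonated 3-nonenal)":   ion_mass(C=9, H=17, O=1),      # 141.13
    "C9H15+ (dehydrated 3-nonenal)":    ion_mass(C=9, H=15),           # 123.12
    "C6H13O+ (protonated n-hexanal)":   ion_mass(C=6, H=13, O=1),      # 101.10
    "C6H11+ (dehydrated n-hexanal)":    ion_mass(C=6, H=11),           # 83.09
    "C3H5O2+ (protonated MDA)":         ion_mass(C=3, H=5, O=2),       # 73.03
    "C3H3O2+ (MDA - H, NO+ chemistry)": ion_mass(C=3, H=3, O=2),       # 71.01
    "C3H4O2.NO+ (cluster ion)":         ion_mass(C=3, H=4, O=3, N=1),  # 102.02
}
for name, mass in ions.items():
    print(f"{name}: {mass:.2f} u")
```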
Finally, coincident with the products that are detected by PTR-MS, we note that glyoxal was observed to be formed simultaneously using the LED-CE-DOAS (Fig. 4). As mentioned in the Introduction, the 3-nonenal and n-hexanal products have been identified in previous studies (Moise and Rudich, 2002; Thornberry and Abbatt, 2004) on the heterogeneous oxidation of pure LA thin films/particles by ozone. The present work demonstrates that the same products are formed when monolayer LA is oxidized on seawater. However, the formation of two gas-phase dialdehydes, namely MDA and glyoxal, was observed for the first time.
Gas-phase aldehyde quantification
Figures 4 and 5 illustrate the temporal profiles of ozone and gas-phase aldehyde formation from Type 1 and 2 experiments, respectively.Upon ozone exposure, n-hexanal (m / z 83) and 3-nonenal (m / z 123) formed immediately and quickly reached steady-state levels in Type 1 experiments followed by slightly slower formation of the dicarbonyls, i.e.MDA (m / z 73) and glyoxal (Fig. 4).
We interpret the steady-state, sustained production of volatile products in Type 1 experiments (Fig. 4) to indicate that the LA monolayer is being replenished by the excess LA droplet.In contrast, the Type 2 experiments, where only 3.8 × 10 16 molecules (or 1 × 10 14 molecules cm −2 ) of LA were added to the boat, demonstrate the consumption of the surface coverage.In both cases, we believe that the less volatile reaction products, potentially acids or hydroxyhydroperoxides (Katrib et al., 2004;Hung et al., 2005;Ziemann, 2005), dissolve in the water or remain at the surface.On the one hand, Voss et al. (2007), who investigated the reaction of ozone with OA monolayer on water, suggested that non-volatile products dissolved into the aqueous phase and did not remain at the interface.In contrast, King et al. (2009) investigated the similar reaction with deuterated OA at the air/water interface and reported that the nonanoic acid product remained at the surface.This latter scenario seems unlikely in the present work; otherwise the LA monolayer would have been diluted by the condensed-phase products at the interface and the product signals in the Type 1 experiments would have decreased in intensity as the LA oxidation processed.Tables 1 and 2 summarize the quantification of the gasphase aldehyde yields from Type 1 and 2 experiments, respectively.The product yields in Type 1 experiments were quantified relative to the amount of ozone consumed.In Type 2 experiments, the products were quantified by integrating the individual product peaks in Fig. 5, and the yields are reported relative to the amount of LA consumed on the seawater.The uncertainties in Tables 1 and 2 reflect variability in the experiments.The largest systematic errors in the yields arise from the PTR-MS calibrations which, in combination with other uncertainties, are estimated to be on the order of ± 20 %.The uncertainty in the glyoxal measurements is estimated to be ± 10 %.Because the individual yields from the different experiment types are in excellent agreement, we report average yields for n-hexanal, 3-nonenal, and glyoxal of 78 %, 29 %, and 0.04 %, respectively.
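For clarity, the two yield definitions can be summarized in code: Type 1 yields compare steady-state product mixing ratios with the ozone consumed, while Type 2 yields integrate the transient product peaks and compare the resulting number of molecules with the LA deposited. The number density and the trapezoidal integration in the sketch below are illustrative; the actual data reduction is not reproduced here.

```python
import numpy as np


def type1_yield(product_ppb, o3_consumed_ppb):
    """Type 1: steady-state product mixing ratio relative to the ozone consumed."""
    return product_ppb / o3_consumed_ppb


def type2_yield(time_s, product_ppb, flow_sccm, la_molecules):
    """Type 2: time-integrated product release relative to the LA deposited
    (3.8e16 molecules in these experiments)."""
    dt = np.diff(time_s)
    ppb_s = float(np.sum(0.5 * (product_ppb[1:] + product_ppb[:-1]) * dt))  # trapezoid rule
    n_air = 2.46e19                # molecules cm-3 at ~296 K and 1 atm (illustrative)
    flow_cm3_s = flow_sccm / 60.0
    product_molecules = ppb_s * 1e-9 * n_air * flow_cm3_s
    return product_molecules / la_molecules
```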
Given the fact that many research papers report reactive halogen production from the reactions of ozone with aqueous halide solution (Saiz-Lopez and von Glasow, 2012, and references therein), one may inquire whether the ozone reaction with the seawater in this work affected the product yield measurements.To monitor the ozone reaction with the Sigma-Aldrich seawater, test experiments were conducted using a set-up similar to that for the Type 1 experiments without LA addition.The results show that only ∼2 ppb of ozone reacted with the artificial seawater.Compared to the total ozone loss due to reaction with LA in Type 1 experiments (85-150 ppb) (Table 1), this loss due to reaction with the seawater is negligible.Therefore, we conclude that, while the contribution of halogen chemistry to aqueous or gas-phase products cannot be ruled out, compared to the ozone reaction its contribution will not affect the product distributions at a significant level.Indeed, we believe that the chemistry would likely occur in an independent manner, with hypohalous acids forming from the reaction of ozone with halides, which will then rapidly form dihalogens that will degas from the solutions.
We note, however, that glyoxal is a highly soluble substance (effective Henry's law constant Heff = 4.19 × 10⁵ M atm⁻¹; Ip et al., 2009) and may dissolve into the condensed phase, while some fraction evaporates to the gas phase. As a result, we can only infer semi-quantitative information about the formation of glyoxal, and the yields reported in Tables 1 and 2 are likely to be lower limits. Given that other gas-phase products account for close to 100 % of the reactants consumed, glyoxal is not expected to be a major product. Nevertheless, we note that corrected yields for glyoxal in the flow tube experiments could be more than an order of magnitude larger than the yields listed in Tables 1 and 2, if the chemistry at the seawater surface is different from that in the Teflon chamber. In addition, we note that the mechanism by which glyoxal may enter the condensed phase may be affected by the presence of the organic surface layer. In particular, molecular dynamics calculations show that organic monolayers can affect the interactions of molecules like O3 with an aqueous sub-phase. These calculations indicate that the net collision rate in the presence of a thin organic layer (butanol, on NaI(aq)) is virtually identical to that in the absence of the organic layer (D. Tobias, personal communication, 2013). It is likely that similar trapping applies to other molecules, e.g. glyoxal. However, the limited knowledge of glyoxal uptake and hydration kinetics impedes our full understanding of its loss processes in the flow tube. Nevertheless, given glyoxal's high solubility in water, we believe that the correction factors for glyoxal have the potential to be large.
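A rough equilibrium estimate illustrates why the gas-phase glyoxal yields should be read as lower limits. Treating the flow tube as a closed two-phase box, with an assumed gas volume of about 0.2 L above the 10 mL of seawater, essentially all of the glyoxal would reside in the aqueous phase at equilibrium; the observation of gas-phase glyoxal therefore implies kinetically limited uptake, consistent with the discussion above.

```python
# Closed-box equilibrium estimate (the real system is a ~10 s flow with
# kinetically limited uptake through the organic monolayer).
H_eff = 4.19e5  # M atm-1 (Ip et al., 2009)
R = 0.08206     # L atm mol-1 K-1
T = 296.0       # K
V_aq = 0.010    # L of seawater in the boat
V_gas = 0.19    # L, approximate gas volume of the flow tube (assumed)

K = H_eff * R * T                     # dimensionless aqueous/gas partition coefficient
f_aq = K * V_aq / (K * V_aq + V_gas)  # fraction dissolved at equilibrium
print(f"equilibrium aqueous fraction: {f_aq:.6f}")  # ~1, i.e. gas-phase yields are lower limits
```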
Preparation of known gas-phase levels of MDA is not easily achieved, and so this signal remains uncalibrated directly.Instead, we estimate the yields of MDA to be 4 ± 3 % based on the average of the calibration factors for other aldehydes, or using an indirect approach described below.The values plotted in the figures represent those from the indirect approach.However, we note that the same caveat described above for glyoxal also applies for MDA, given that it is expected to be highly soluble; i.e. the yields are expected to be lower limits.
While the 3-nonenal yield is similar to that from the study of the oxidation of pure LA by Thornberry and Abbatt (2004), the n-hexanal yield is higher by roughly a factor of 3. As mentioned above, the n-hexanal PTR-MS signal was calibrated with two different methods, i.e. using known mixing ratios prepared in a chamber and in the glass bulb/flow tube, and both calibration procedures gave similar results. As well, the similarity between the results from the Type 1 and Type 2 experiments suggests there is no error in the amount of reactant consumed. Thus, we conclude that the n-hexanal yield from oxidation of LA molecules is different when they are present as a monolayer on water as compared to the pure form, as studied by Thornberry and Abbatt (2004). The monolayer LA at the air-water interface is likely arranged with the hydrophilic headgroups (-COOH) in direct contact with the aqueous phase and the hydrophobic tails containing the carbon-carbon double bonds oriented towards the air (Gill et al., 1983; Ellison et al., 1999; Donaldson and Vaida, 2006). As a result, the C12-C13 carbon-carbon double bond will be farther from the interface and may exhibit less steric hindrance than the C9-C10 bond in reactions with O3 diffusing from the gas phase (Fig. 6). In contrast, when a pure LA liquid is oxidized, the two double bonds in LA may be randomly oriented and exhibit the same collision frequencies with O3.
Reaction mechanism for ozonolysis of LA monolayers
As shown in Fig. 6, the heterogeneous reaction between unsaturated FA and ozone is believed to proceed via addition of ozone to the carbon-carbon double bond forming primary ozonides (PO).The decomposition of the PO leads to aldehydes and the Criegee biradical, which further reacts with water to form acids, hydroxyhydroperoxides, or with carbonyls to form secondary ozonides (Wadia et al., 2000;Moise and Rudich, 2002;Hung and Ariya, 2007;Vesna et al., 2009).
The gas-phase product yields indicate that roughly 100 % of the reactants (i.e.either ozone or LA) are converted to the two major products, n-hexanal and 3-nonenal.This is in contrast to the results from oxidation of liquid LA by Thornberry and Abbatt (2004), who reported 50 % aldehyde yield.As well, they saw roughly equal yields of n-hexanal and 3-nonenal.
In the work of Thornberry and Abbatt (2004), the residence time of products and O3 in the flow tube was on the order of 0.1 s, whereas in this work the timescale is ∼10 s. Hence, further oxidation of 3-nonenal by ozone may produce n-hexanal, leading to the higher ratio of n-hexanal to 3-nonenal; i.e., it is possible that both are formed initially at roughly 50 % yield and secondary reactions give rise to the observed enhancement of n-hexanal over 3-nonenal. While it is unlikely that the gas-phase kinetics is fast enough to drive this oxidation pathway, it is possible that 3-nonenal is heterogeneously oxidized to n-hexanal. While speculative, it is for this reason that we indicate the branching ratios in Fig. 6 to the two primary ozonides to be 50 to 70 % and 30 to 50 %. The roughly 100 % total yield of 3-nonenal and n-hexanal suggests that the PO decompose exclusively to gas-phase n-hexanal and 3-nonenal and corresponding Criegee intermediates (CI) that are likely formed on the side of the LA molecule containing a carboxylic acid functional group. The fact that the ozonide does not decompose with equal probability into two different sets of aldehydes and CI is somewhat surprising, and it is the reason that we investigated the yields in two different experiments, i.e. the Type 1 and Type 2 experiments, and why we calibrated the PTR-MS to n-hexanal via two independent methods. The consistency of the results is excellent. We conclude that the conformation of the LA molecule when existing as a monolayer at the air-water interface preferentially favors decomposition of the ozonide in the manner indicated. While there are a number of studies on the effects of chemical structure on the decomposition pathways of the gas-phase ozonides formed from ozonolysis of alkenes, little is known about the decomposition mechanisms for the ozonides formed from heterogeneous reactions. We can only speculate that the CI shown to be formed in Fig. 6 might be favored because it has a higher water solubility than the other CI that could form, and perhaps the excited CI is more easily stabilized by the interaction with liquid water. More experimental and theoretical studies are needed to investigate this mechanism. This is the first study to probe the reaction mechanism for the heterogeneous reaction of O3 with a PUFA at the air/water interface. As mentioned previously, King et al. (2009) reported a much higher yield of nonanoic acid (87 %) from the heterogeneous reaction of ozone with a monolayer of deuterated OA at the air/water interface compared to previous studies with pure OA thin films or particles (∼10 %; Katrib et al., 2004; Ziemann, 2005); i.e., few volatile aldehydes are expected to have formed in the King et al. (2009) work. By contrast, the work of Wadia et al. (2000) reported a 50 % yield of nonanal from the oxidation of 1-oleoyl-2-palmitoyl-sn-glycero-3-phosphocholine sitting at the air-water interface in a compressed state. Interestingly, in one set of experiments from Wadia et al. (2000) using an expanded film of 1-oleoyl-2-palmitoyl-sn-glycero-3-phosphocholine, the yield was two to three times higher than in the compressed state (see Table 2 of Wadia et al., 2000), i.e., matching the n-hexanal results from this work, where it is likely that the LA films will also be in the expanded state. The present work and these past studies imply that the mechanisms and product yields may be highly dependent on the molecular arrangement of the substrates, and, for soluble species, also on the effective collision rate with the aqueous sub-phase.
The formation of two dialdehydes, i.e., MDA and glyoxal, may arise from secondary oxidation of 3-nonenal (Fig. 6). To test this reaction mechanism, the gas-phase reaction of ozone with 3-hexenal was investigated in the 1 m3 Teflon chamber, where 3-hexenal was used as a surrogate for 3-nonenal, which is commercially unavailable. The experiment was conducted in the presence of ∼200 ppm cyclohexane, which was used to scavenge more than 95 % of the OH radicals generated in the reaction. Fig. 7a shows the time series of the reactant and product mixing ratios, where the signals at m/z 81, 59 and 73 were used to quantify 3-hexenal, propanal and MDA, respectively. After the signal at m/z 81 stabilized, the oxidation of 3-hexenal was initiated by addition of ozone, leading to the production of propanal, MDA and glyoxal. The propanal and glyoxal formation yields can be determined from Fig. 7b to be 74 ± 14 % and 0.9 ± 0.1 %, respectively. From the mechanism for the reaction of ozone with 3-hexenal in Fig. 8 it is reasonable to assume the yield of MDA is 25 ± 5 %, which in turn allowed us to indirectly calibrate the m/z 73 signal to MDA.
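As a rough illustration of how such formation yields can be extracted from chamber time series, the sketch below fits the cumulative product formation against the cumulative 3-hexenal loss; the function name and the example numbers are assumptions for illustration only, not the analysis code used for Fig. 7b.

```python
import numpy as np

def formation_yield(reactant_ppb, product_ppb):
    """Moles of product formed per mole of reactant consumed (slope through the origin)."""
    consumed = reactant_ppb[0] - reactant_ppb   # cumulative reactant loss
    formed = product_ppb - product_ppb[0]       # cumulative product formation
    mask = consumed > 0
    return np.sum(formed[mask] * consumed[mask]) / np.sum(consumed[mask] ** 2)

# Hypothetical mixing ratios (ppb) after the ozone addition
hexenal = np.array([10.0, 8.0, 6.5, 5.2, 4.3])
propanal = np.array([0.0, 1.5, 2.6, 3.5, 4.2])
print(f"propanal yield ~ {formation_yield(hexenal, propanal):.2f}")
```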
MDA and glyoxal are formed as first-generation oxidation products from ozonolysis of 3-hexenal (Fig. 7). Possible formation mechanisms for MDA and glyoxal are shown in Fig. 8 and rationalize the observed rapid formation of MDA and glyoxal in this system.
A 1,4-H shift of the Criegee biradical can explain the rapid formation of small amounts of glyoxal. 1,4-H shifts have recently been proposed (Dibble et al., 2004a, b) to explain observations of minor products in the oxidation of isoprene (Volkamer et al., 2005b; Paulot et al., 2009; Galloway et al., 2011). The yields of products from 1,4-H shifts are generally small (a few percent), consistent with those in this work.
The relative yield of glyoxal / MDA is ∼1/20 in the Teflon chamber, while this ratio in the flow tube is 3-5 times smaller. This might be due to loss of the soluble glyoxal to the aqueous sub-phase. If the same relative yields of glyoxal / MDA are assumed for ozonolysis of 3-hexenal and 3-nonenal, the glyoxal formation yields in the flow tube experiments could be higher (Tables 1 and 2). The factor of 3-5 higher yields after correction is a lower limit for the overall losses that might be occurring in the flow reactor, since MDA concentrations have not been corrected for solubility in the aqueous sub-phase. As such, efficient uptake of glyoxal would indicate that the presence of an organic SML is not an efficient barrier preventing glyoxal losses to the aqueous sub-phase (see also the discussion in the previous section).
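A minimal numerical sketch of this correction is given below, assuming the chamber-derived glyoxal / MDA ratio of ∼1/20 and an arbitrary example MDA yield; the values are placeholders and are not taken from Tables 1 and 2.

```python
# Chamber-derived relative yield and an arbitrary example flow-tube MDA yield
chamber_glyoxal_to_mda = 1.0 / 20.0   # glyoxal / MDA observed in the Teflon chamber
example_mda_yield = 0.05              # placeholder flow-tube MDA yield (fraction)

# Bracketed-value style estimate: glyoxal yield expected if none were lost
# to the aqueous sub-phase (a lower limit on the true loss correction)
expected_glyoxal_yield = example_mda_yield * chamber_glyoxal_to_mda
print(f"expected glyoxal yield ~ {expected_glyoxal_yield:.4f}")
```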
In addition, glyoxal may also be produced from the interaction of ozone with the enol form of MDA, which has been shown to be stable both in solution and in the gas phase (Brown et al., 1979; Trivella et al., 2008) (Fig. 8). However, based on the time series of MDA and glyoxal in Fig. 7, the contribution of further oxidation of MDA to glyoxal formation should be of minor importance.
In one additional Type 1 experiment (see Fig. 9), we passed the effluent from the LA flow tube through a second flow tube in which was placed a 20 cm-long, 254 nm Hg Pen-Ray lamp (see Lambe et al. (2011) for a description of the flow tube). The combination of water vapor, ozone and UV radiation results in high steady-state OH concentrations, on the order of 10⁹ to 10¹⁰ molecules cm⁻³, over a residence time of roughly 30 s. When the UV light source was turned on, there were losses in the levels of n-hexanal, 3-nonenal and MDA and pronounced formation of glyoxal. This suggests that these precursors, most likely 3-nonenal and MDA, can be oxidized to glyoxal by OH radicals.
Gas-phase products from ozonolysis of natural SML samples
Similar gas-phase products from heterogeneous ozonolysis of two natural SML samples, SML-CA and SML-EqPOS, were observed by PTR-TOF-MS with H3O+ as the reagent ion. Fig. 10 presents the evolution of selected masses during oxidation of SML-CA with ozone. After the samples were added to the glass boat, most PTR-TOF-MS signals increased quickly to a maximum and then slowly decreased as the samples degassed. While it is possible that some of these signals arise from species that were originally dissolved in the SML at the point of collection, it is more likely that these species formed after collection via some form of sample degradation. When the signals had almost levelled off, the oxidation of organic substances in the SML was initiated by turning on the ozone generator. Heterogeneous oxidation of SML samples by ozone produced two sets of products, with one set observed in both the SML-CA and the SML-EqPOS samples (Fig. 10a) and the other observed only in the SML-CA samples (Fig. 10b). Figure 10b presents the second set of products, which were only observed in the ozone reaction with the coastal sample, SML-CA. Different from the first set of products, these signals increased quickly upon ozone exposure, then dropped back to background levels and did not rise with a second ozone exposure. This behavior suggests they arise from lower-concentration precursors that are consumed upon the initial ozone exposure. These signals also appear in the oxidation of LA (Fig. 2), where they were attributed to fragmentation of protonated and/or dehydrated n-hexanal and 3-nonenal. The second set of products, therefore, may also originate from similar monocarbonyls, although other precursors cannot be ruled out. The PTR-TOF-MS responses to several aldehydes and ketones, namely acetaldehyde, acrolein, acetone and butanone, were found to have similar calibration factors. Using an average calibration factor, the mixing ratios of the gas-phase carbonyls that arise upon ozone exposure from ozonolysis of SML are estimated. For the first set of products, they were ∼200 ppt for the C2 and C3 monocarbonyls and ∼30 ppt for the C4 and C5 monocarbonyls and the C3 and C4 dicarbonyls. For the second set of products, the maximum monocarbonyl mixing ratios were estimated to be 30-90 ppt.
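As an illustration of this estimation step, the sketch below converts assumed background-subtracted ion signals to mixing ratios with a single average carbonyl sensitivity; the sensitivity value and count rates are hypothetical and are not the calibration factors used in this work.

```python
# Single average carbonyl sensitivity (hypothetical), normalized counts s^-1 ppb^-1
avg_sensitivity = 12.0

# Hypothetical background-subtracted product signals (normalized counts s^-1)
signals = {
    "C2 monocarbonyl": 2.4,
    "C3 monocarbonyl": 2.3,
    "C4 dicarbonyl": 0.35,
}

for species, ncps in signals.items():
    mixing_ratio_ppt = ncps / avg_sensitivity * 1000.0
    print(f"{species}: ~{mixing_ratio_ppt:.0f} ppt")
```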
It should be noted that no dicarbonyls were calibrated with the PTR-TOF-MS, so the application of the calibration factor obtained from monocarbonyls may induce high uncertainties in the dicarbonyl quantification. Moreover, even for structurally similar carbonyls the calibration factors for different isomers can be significantly different (Warneke et al., 2003). Thus, the mixing ratios for the carbonyl products given above should be viewed as only initial estimates.
Conclusions and atmospheric implications
There are three main conclusions from this work. First, it has been demonstrated that volatile carbonyls, such as n-hexanal and 3-nonenal, are formed when LA exists as a monolayer on artificial seawater and is exposed to O3. These species are formed promptly and in high yields. With similar PUFA being common components of the SML, it is likely that exposure of the SML to ozone in the environment will lead to the release of similar volatile aldehydes. To our knowledge this is the first study of the oxidation of a PUFA existing as a monolayer on seawater. This adds to an extensive body of literature which demonstrates that such species are formed when pure unsaturated fatty acids are oxidized (Moise and Rudich, 2002; Thornberry and Abbatt, 2004; Vesna et al., 2009). As well, formation of small VOCs from heterogeneous oxidation of other types of unsaturated organics, such as the ozone reaction with vinyl-terminated self-assembled monolayers (Dubowski et al., 2004), squalene films (Petrick and Dubowski, 2009) and fumaric acid aerosols (Najera et al., 2010), has also been reported. The only comparable system that has been studied at the air-water interface is the unsaturated phospholipid oxidation studied by Wadia et al. (2000), where nonanal formation was observed in high yield.
Second, we have demonstrated that highly reactive gas-phase dicarbonyls, i.e. malondialdehyde and glyoxal, are formed in the PUFA reaction system, probably through secondary reactions of primary products. These mechanisms were confirmed by a chamber study on the gas-phase reaction of 3-hexenal with ozone (Fig. 7) and by further oxidation of primary products from ozonolysis of LA with OH radicals (Fig. 9). While glyoxal has been measured in the atmosphere (Sinreich et al., 2010), detection of MDA has not been reported. We note that in the biochemical literature, MDA is an important cross-linking agent that reacts with the amino groups of enzymes, proteins and DNA (Tappel, 1980; Wang et al., 2009; Passagne et al., 2012). Based on the thiobarbituric acid (TBA) reactive assay (Wang et al., 2009; Passagne et al., 2012), a number of studies suggest that MDA is formed from oxidation of lipids as well as PUFA (Pryor et al., 1976; Frankel, 1984; Scislowski et al., 2005; Santos-Zago et al., 2007). At this point, we are unable to accurately quantify the yields of glyoxal and MDA given that they may be dissolving to some degree in the aqueous sub-phase. The yields reported in the paper are likely lower limits.
Our third conclusion is that ozone exposure to natural SML samples leads to the formation of a wide variety of oxygenated VOCs similar to those formed from PUFA oxidation, i.e. small mono- and dicarbonyls. While it is clear that SML materials are highly complex, so that we cannot attribute these products to specific reactants, it is nevertheless important to demonstrate that the SML represents a reactive medium that may lead to VOC production via heterogeneous oxidation. This initial study warrants further investigations of SML-ozone interactions.
Fig. 2. Mass spectra for the gas-phase products from Type 1 experiments recorded by PTR-MS (A) and PTR-TOF-MS (B). In (B), the shaded area represents exposure to 500 ppb O3.
Fig. 3. PTR-TOF-MS mass spectrum of the signal at m/z 73.02 recorded with H3O+ as the reagent ion (A) and three signals recorded with the PTR-TOF-MS as the reagent ion is switched from H3O+ to NO+ (B), from a Type 1 experiment with 500 ppb O3 exposure.
Fig. 4. Example of ozone and product profiles for a Type 1 LA experiment.
Fig. 5. Example of ozone and product profiles for a Type 2 LA experiment.
Table 1. Gas-phase aldehyde formation yields relative to the amount of ozone consumed from Type 1 experiments with LA. The uncertainties reflect the variation in results across 3 experiments, except in the case of glyoxal, where the experiment was conducted once. a Yield uncorrected for product dissolution in the aqueous sub-phase; significant product loss is expected, but correction factors are uncertain (see text in Sect. 3.3); b values in brackets are calculated assuming the ratio of glyoxal / MDA ∼1/20 found in the Teflon chamber experiments.
Fig. 6. Reaction mechanism for the heterogeneous reaction of ozone with LA at the air/water interface.
Fig. 7. (A) Time series of the reactants and products and (B) for the reaction of gas-phase 3-hexenal with ozone in the Teflon chamber.
Fig. 9. Flow tube results when a UV lamp is turned on (in the shaded region) to form OH in a second flow tube in series downstream of the LA oxidation flow tube.
Fig. 10. Evolution of selected masses during oxidation of SML-CA with ozone. The shaded areas represent when ozone was present at 350 ppb.
A few other signals also rose in the presence of O3, such as m/z 65.02 and 101.02. They match C2H3F2+ and C2H4F3O+, respectively, which are most probably from the Teflon tubing used in the experiments. Meanwhile, additional strong signals, such as m/z 81.02 and 85.03, were steady in the presence and absence of O3.
Table 2. Gas-phase aldehyde formation yields relative to the amount of LA consumed from Type 2 experiments with LA. Columns: LA consumed (nmoles) and gas-phase aldehyde yields (%); average yields: 77.6 ± 0.9, 25.7 ± 1.4, 2-7, 0.06 a (0.22) b. a Yield uncorrected for product dissolution in the aqueous sub-phase; significant product loss is expected, but correction factors are uncertain (see text in Sect. 3.3); b values in brackets are calculated assuming the ratio of glyoxal / MDA ∼1/20 found in the Teflon chamber experiments. | 2017-10-19T05:32:20.868Z | 2013-07-03T00:00:00.000 | {
"year": 2013,
"sha1": "b16d078ea80721246c0e3467acdf7237b6602988",
"oa_license": "CCBY",
"oa_url": "https://acp.copernicus.org/articles/14/1371/2014/acp-14-1371-2014.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "b16d078ea80721246c0e3467acdf7237b6602988",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
54100541 | pes2o/s2orc | v3-fos-license | One-Year Follow-Up Examination of the Impact of the North Carolina Healthy Food Small Retailer Program on Healthy Food Availability, Purchases, and Consumption
We examined the short-term impact of the North Carolina Healthy Food Small Retailer Program (HFSRP), a legislatively appropriated bill providing funding up to $25,000 to small food retailers for equipment to stock and promote healthier foods, on store-level availability and purchase of healthy foods and beverages, as well as customer dietary patterns, one year post-policy implementation. We evaluated healthy food availability using a validated audit tool, purchases using customer bag-checks, and diet using self-reported questionnaires and skin carotenoid levels, assessed via Veggie Meter™, a non-invasive tool to objectively measure fruit and vegetable consumption. Difference-in-difference analyses were used to examine changes in HFSRP stores versus control stores after 1 year. There were statistically significant improvements in healthy food supply scores (availability), with the Healthy Food Supply (HFS) score decreasing by 0.44 points in control stores and increasing by 3.13 points in HFSRP stores pre/post HFSRP (p = 0.04). However, there were no statistically significant changes in purchases, self-reported consumption, or skin carotenoids among customers in HFSRP versus control stores. Additional time or other supports for retailers (e.g., marketing and promotional materials) may be needed for HFSRP implementation to influence purchases and consumption.
Introduction
The high prevalence of obesity in the United States (U.S.) is a major public health concern because obesity is associated with increased risk for type 2 diabetes [1,2], cardiovascular disease [3], and some cancers [4,5]. This high prevalence of obesity could be decreased if the majority of the U.S. population consumed a diet rich in fruits, vegetables, lean protein, whole grains, and healthier beverages [6][7][8][9]. Yet,
Store Selection
In 2016, six corner stores received HFSRP funding which was provided via a competitive application process. The evaluation team (the authors) did not participate in selecting the HFSRP stores which were selected for funding. We collected baseline data in February-May 2017 in five of the six stores, as the NCDA&CS HFSRP coordinator felt that one store was undergoing too many transitions for our team to collect baseline data in that store. We selected control stores matched to the HFSRP stores based on a variety of factors, including North American Industry Code Standards (NAICS) store type (small grocery or convenience store), store size, census tract food desert type, and similar demographics, including percent of the census tract on Supplemental Nutrition Assistance Program (SNAP) benefits and the percent African American residents. [29] Due to resource constraints, for our one-year follow-up, we collected data in a subset (n = 4) of the 11 original control stores and in 4 of the originally-funded HFSRP stores. Table 1 below includes matching variables for each matched pair.
HFSRP Intervention
The details of what each of the four HFSRP stores did using the HFSRP funding are described in Table 2 below.
Table 2. HFSRP intervention details, by intervention store.
Store A: Ordered new equipment; a large promotional event was planned upon installation. However, equipment was not installed at the time of the first report to the NC Legislature on 1 October 2017.
Store B: Ordered and installed a small freezer (August-October 2017) and converted a candy rack into a produce display. The store owner prepared sliced cucumbers for grab-and-go snacks and sold all that were prepared.
Store C: Ordered equipment in August 2017 and is partnering with the local health department on promotions to highlight healthier options.
Store D: Ordered equipment in August 2017, and the owner is now able to stock produce from local farmers.
Healthy Food Availability
This was assessed using the Healthy Food Supply (HFS) score, as described in our previous work [29]. Briefly, in-store audits were conducted using a form adapted from the validated Nutrition Environment Measures Survey for Stores (NEMS-S) [30]. The adapted NEMS-S includes 10 categories including 18 foods/food types. We replicated the method of Andreyeva et al. [31] and Caspi et al. [30] to create a store-level HFS score, summarizing availability, price, quality, and variety of food and beverage items in each store. The HFS score has a possible range of 0-31, with higher scores indicating healthier items [30].
Ethical Issues and Informed Consent
The East Carolina University Institutional Review Board reviewed and approved study #UMCIRB 16-002420 on 31 January 2017. We obtained a waiver of informed consent from the East Carolina University (ECU) Institutional Review Board and thus obtained verbal consent after customers were given written information about the study. Customers were provided with a $10 gift card upon survey completion.
Customer Intercept Survey
We conducted customer intercept surveys in each store both at baseline and Year 1 follow-up. Participants were recruited by interviewers after making purchases. The intercept survey consisted of 45 items which included a "bag check" (described below), frequency of shopping at the store, shopping at other small stores, availability of fresh fruits and vegetables in the neighborhood, fruit, vegetable, and sugary beverage consumption, and demographics (sex, age, race, ethnicity, highest grade of school completed, annual household income, employment), self-reported height and weight, and Veggie Meter™ assessment. Consumption of fruits, vegetables, and sugary beverages and the Veggie Meter™ assessment are described below.
Healthfulness of Food and Beverage Purchases
We calculated an aggregate, store-level Healthy Eating Index from customer "bag checks," wherein interviewers recorded product name, brand, size, quantity and price paid for each item purchased [32]. We used bag check data instead of receipt data, as many small stores do not commonly provide customers with itemized receipts. We calculated a single aggregated store-level Healthy Eating Index (HEI)-2010 score for each store, with a possible range from 0 to 100. The HEI-2010 comprises 12 components and is scored per 1000 kcal. We used the aggregated score because some purchases involved a very small number of items, which would make individual-level HEI scores less meaningful. The NCI Automated Self-Administered 24-hour recall website (ASA24) was used to determine kilocalories, added sugars, fiber and the general nutrient profile of purchases made at HFSRP and control stores. We calculated HEI-2010 scores for purchases using the SAS macros provided by NCI. The HEI-2010 is a valid indicator of whether a diet or food source is consistent with federal dietary guidelines [33,34]. The aggregate HEI was calculated by deriving each of the 12 sub-components according to published guidelines [35,36].
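To make the aggregation idea concrete, the sketch below pools assumed bag-check items for one store and scores a single illustrative density-based component per 1000 kcal; it is a simplified stand-in, not the NCI SAS macro implementation of the full 12-component HEI-2010.

```python
from dataclasses import dataclass

@dataclass
class Item:
    kcal: float
    fruit_cups: float  # cup-equivalents of total fruit in the purchased item

def total_fruit_component(items):
    """HEI-2010-style 'total fruit' score: 5 points at >= 0.8 cup eq per 1000 kcal."""
    kcal = sum(i.kcal for i in items)
    cups = sum(i.fruit_cups for i in items)
    density = (cups / kcal * 1000.0) if kcal > 0 else 0.0
    return min(5.0, 5.0 * density / 0.8)

# Hypothetical pooled bag-check items for one store
store_items = [Item(kcal=250, fruit_cups=0.0), Item(kcal=120, fruit_cups=1.0), Item(kcal=600, fruit_cups=0.5)]
print(f"store-level total fruit component: {total_fruit_component(store_items):.1f}")
```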
Customer Dietary Intake
Participants self-reported fruit and vegetable intake using single items for both fruits and vegetables, as in Ortega et al. [37], i.e., "On a typical day, how many servings of fruits do you eat? (A serving of fruit is like a medium-sized apple or a half cup of fresh fruit; this does not include fruit juice)" with responses reported as whole numbers. Respondents also reported fruit and vegetable consumption using the NCI Fruit and Vegetable Screener [38]. The NCI screener includes frequency and amount questions for the following items: 100% juice; fruit (fresh, canned, frozen, no juice); lettuce salad; French fries/fried potatoes; other white potatoes; cooked dried beans; other vegetables; and tomato sauce. Data from the NCI screener were analyzed using the older version of the standard NCI algorithms as in our baseline paper [29]. The questionnaire also included two sugary beverage questions regarding frequency of consumption of regular soda and sweetened fruit drinks adapted from the Behavioral Risk Factor Surveillance System (BRFSS) [39].
In addition to self-reported dietary assessment, we used the validated Veggie Meter™ device [40], which operates via pressure-mediated reflection spectroscopy (RS) [41,42], to assess skin carotenoid status as a proxy for fruit and vegetable consumption. The Veggie Meter™ takes measurements at the finger; as in our baseline data collection, at follow-up each participant's finger was scanned 3 times and the average value was used. Body mass index (BMI, kg/m²) was calculated from self-reported height and weight. We corrected for the error in self-reported height and weight using a correction factor [43].
Statistical Analysis
We compared data from customers from the four HFSRP and four control stores on demographics and dietary intake. We used t-tests or chi-squared tests to examine pre/post differences in customer characteristics for both HFSRP and control store customers. Unadjusted difference-in-difference analyses between HFSRP and control customers were conducted using two-way ANOVA or logistic regressions. Changes in store-level HFS scores and aggregate, store-level HEI scores for purchases in the eight stores were assessed using two-sample t-tests. Changes in customer-level fruit, vegetable, and sugary beverage consumption, as well as skin carotenoids and BMI, were assessed using adjusted difference-in-difference models (adjusted for age, race, and sex, with repeated measures to account for clustering of customers in stores), with propensity scores used when there were significant differences in customer characteristics. Table 3 includes differences in customers pre/post HFSRP in both HFSRP and control stores. The mean age across the years was 42.5-44.9 years and the mean BMI was 27.7-30.5 kg/m².
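A simplified sketch of the difference-in-difference idea is shown below, where the coefficient on the group-by-time interaction term is the estimate of interest; the column names and data are invented for illustration, and the actual analysis additionally used propensity scores and repeated-measures adjustment for clustering of customers within stores.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented example data: one row per customer observation
df = pd.DataFrame({
    "fruit_servings": [1.0, 2.0, 1.5, 2.5, 1.2, 1.8, 2.1, 1.1],
    "hfsrp": [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = HFSRP store customer
    "post":  [0, 1, 0, 1, 0, 1, 0, 1],   # 1 = Year 1 follow-up
    "age":   [40, 52, 37, 60, 45, 41, 58, 33],
})

# The coefficient on hfsrp:post is the difference-in-difference estimate;
# the real models also adjusted for race and sex and accounted for clustering.
model = smf.ols("fruit_servings ~ hfsrp * post + age", data=df).fit()
print(model.params["hfsrp:post"])
```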
Healthy Food Supply Scores
There was a significant difference in HFS scores in HFSRP versus control stores. The HFS score decreased by 0.44 points in control stores and increased by 3.13 points in HFSRP stores pre/post HFSRP (p = 0.04). Table 4 below lists the stores, the average 2017 HFS score and 2018 score, and the change in HFS score for each store. HFS scores in HFSRP Store A increased due to an increase in availability of lower-fat milk, a higher price of whole versus lower-fat milk, an increase in availability of fresh fruits and vegetables, and an increase in canned vegetable availability. HFS scores in Store B increased due to increases in frozen fruits and vegetables and canned fruit, and increases in availability of brown rice and whole grain cereals.
HFS scores in Store C increased due to a higher ratio of low-fat relative to whole milk, more varieties of frozen vegetables and canned fruit. Finally, HFS scores in Store D increased due to increased varieties of fresh vegetables, and more whole grain bread and tortillas.
Healthy Eating Index Scores
Table 5 below shows the HEI scores of customer purchases, aggregated at the store level, for the HFSRP and control stores, as well as the change in HEI scores. The mean change in HEI from baseline to Year 1 for control stores was 0.08 and the mean change for HFSRP stores was 1.19, a difference of 1.11 (p = 0.83). Contrary to our hypotheses, the HEI decreased from baseline to Year 1 in three of the four HFSRP stores, indicating customers were making less healthy purchases post-HFSRP intervention, though these differences were not statistically significant. There were no significant differences in fruit and vegetable intake, sugary beverage intake, or in skin carotenoids and BMI among customers from HFSRP versus control stores (see Table 6 below). These results remained similar when income was included as a covariate, so to improve model parsimony, we left income out of the model.
Discussion
The HFSRP has the potential to positively improve access to healthy foods and beverages, and thereby, purchases and dietary intake, which could help diminish health disparities among residents of NC food deserts. Despite this potential, for three consecutive years, the HFSRP has lacked the votes to become more permanent in the form of a law (it has only been approved in appropriations bills which are less permanent forms of law and must be passed annually). If there are positive impacts of the HFSRP on dietary outcomes and rural economies, the NC Legislature would have more evidence to pass the bill into a law, versus an annual appropriations bill.
In our current one-year follow-up study, we found that the supply of healthy foods increased in HFSRP versus control stores, demonstrating that the HFSRP is having a positive impact on foods offered in small stores in NC food deserts. We found that HFS scores increased due to increased lower-fat milk offerings, increase in the varieties of fresh, frozen and canned fruits and vegetables, and more whole grain products. Thus, the HFSRP is reaching the goal of providing healthier options to residents who might otherwise not have such options [15][16][17]. However, we did not find any differences in purchases or either self-reported or objectively-measured dietary behaviors among HFSRP versus control store customers. While we did see positive effects on the healthy food supply within HFSRP stores, it could take longer for positive impacts on purchases and diet to occur among customers. Over time, the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) policy changes in 2009 have resulted in positive changes to the food environment and dietary behaviors among WIC participants [44][45][46][47][48]. The positive changes in the food environment in WIC vendors, in addition to positive changes in dietary behaviors among WIC participants make it reasonable to expect that the HFSRP, which was found to have positive impacts on the availability of healthy foods and beverages, could have similar positive impacts over time.
Caspi et al. [32] found a mean HEI-2010 score across stores in Minneapolis and St. Paul, Minnesota of 36.5. The mean HEI-2010 in NC HFSRP and control stores ranged from 23.5 to 44.7, and HEI-2010 for purchases increased more in the HFSRP stores than in control stores, yet this increase was not statistically significant. Although we did observe negative differences in HEI of purchases in three intervention stores, the change in HEI was not statistically significant. Overall the magnitude of these non-significant changes was also small (HEI changes ranging from 2.22 to 6.76 on a scale of 0-100). It will be interesting to see how the HEI of purchases changes in the coming years, as more HFSRP stores are funded.
In this one-year follow-up study, we did not find any associations between whether a retailer received HFSRP funds and purchased/installed equipment and customers' dietary intake. Thorndike et al. [49] conducted a pilot study among three intervention and three control corner stores to examine whether increasing the visibility and quality of fresh produce in corner stores would result in increased fruit and vegetable purchases. They found that purchases did increase when examining WIC fruit and vegetable voucher redemption (p = 0.036), yet the increase was not statistically significant when comparing self-reported fruit and vegetable purchases from baseline and intervention-period exit interview responses among WIC customers. Based on their results, Thorndike et al. [49] conclude that policies that incentivize stores to stock and display high quality produce could promote healthier food choices, again supporting the need for policies such as the NC HFSRP.
The current study was limited by the small sample size of stores, which may limit the ability to detect significant effects. Also, due to resource constraints, we conducted the audit and calculated the HFS in one HFSRP store in August 2018 (versus Spring 2018). Thus, the mean HFS score among HFSRP stores may be inflated due to increased time for implementation. Another limitation is that the stores were not randomly assigned to HFSRP or control conditions. However, we matched the stores based on store-level factors such as store type and size. Study strengths include objective measures of healthy food and beverage availability, purchases, and diet (fruit and vegetable intake). The study was also set in an underserved and understudied area: rural NC, where interventions of this type are greatly needed.
Conclusions
There were statistically significant improvements in healthy food supply scores, but there were no statistically significant changes in the HEI of purchases or in self-reported consumption, skin carotenoids, or BMI among customers in HFSRP versus control stores. It may be that more time or more intensive education and marketing are needed before positive impacts on customer purchases and consumption are evident. In the future, it would be beneficial to collect qualitative data to learn more about the success of the strategies each HFSRP store tried, and to possibly link these data with changes in customer purchases and consumption. Funding: This research was funded by the Brody Brothers Endowment, East Carolina University. We are grateful to the Guest Editors for waiving the APC. | 2018-12-02T19:39:54.790Z | 2018-11-28T00:00:00.000 | {
"year": 2018,
"sha1": "178e76dfda3bd738ade7f0a784531d433132bf6e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/15/12/2681/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "178e76dfda3bd738ade7f0a784531d433132bf6e",
"s2fieldsofstudy": [
"Business",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Business"
]
} |
257453100 | pes2o/s2orc | v3-fos-license | The U-Shape Relationship between Triglyceride-Glucose Index and the Risk of Diabetic Retinopathy among the US Population
Objective: To explore the association of diabetic retinopathy (DR) with the TyG index and TyG-related parameters among the United States population. Methods: This cross-sectional study is conducted in adults with diabetes mellitus based on the National Health and Nutrition Examination Survey (NHANES) from 2005 to 2018. Multivariate logistic regression, restricted cubic splines, trend tests, receiver operating characteristic curves and subgroup analyses are adopted to uncover the association of DR with TyG index and TyG-related parameter levels in diabetics. Results: An aggregate of 888 eligible participants with diabetes is included, involving 263 (29.6%) patients with DR. The participants are stratified according to quartiles of the TyG index and TyG-related parameters (Q1–Q4). Following adjustment for confounding factors, multivariate logistic regression analysis finds that TyG-BMI, the TyG index and the Q4 TyG index are significant risk factors for DR. The restricted cubic spline shows a U-shaped relationship between the TyG index and the risk of DR in patients with diabetes (p for nonlinearity = 0.001). Conclusions: The triglyceride-glucose index has a U-shaped correlation with the risk of diabetic retinopathy, which has potential predictive value.
Introduction
The prevalence of diabetes is increasing worldwide due to rapid population aging and unhealthy lifestyles characterized by smoking, excessive drinking, sedentary behavior and high-calorie diet intake. Diabetic retinopathy (DR) is a pervasive microvascular complication of diabetes that often leads to blindness, with a global prevalence of 34.6% [1]. The pathological changes of DR are often insidious, leading to delayed medical intervention and presentation at advanced stages of the disease, which result in irreversibly impaired vision and a poor treatment prognosis. Consequently, early prediction, diagnosis and treatment of DR hold significant clinical importance.
Currently, the pathogenesis of DR remains insufficiently understood, and hyperglycemia is typically regarded as the primary cause of DR. In the past, the prevention and treatment of DR focused mainly on managing blood glucose levels and glycosylated hemoglobin levels in diabetic patients. Nevertheless, the pathogenesis of DR is a multifaceted process, and several mechanisms and factors contribute to its occurrence and progression, such as hypertension, abnormal lipid metabolism, inflammation and insulin resistance [2][3][4].
At present, the hyperinsulinemic-euglycemic clamp (HIEC) is considered the gold standard for assessing insulin resistance [5], as it measures peripheral tissue sensitivity to insulin. Nonetheless, due to its complexity and cost, this technique is not widely used in clinical practice. Alternatively, the homeostasis model assessment of insulin resistance (HOMA-IR) is the most commonly used method for assessing insulin resistance in clinical practice and has a good correlation with HIEC [6]. However, fasting insulin needs to be measured when calculating HOMA-IR, which can be difficult to obtain in some primary medical institutions. In 2008, it was first reported that the triglyceride-glucose (TyG) index could be used as a substitute index for insulin resistance [7], which does not rely on fasting insulin levels. Compared to other insulin resistance evaluation indexes, the advantages of the TyG index lie in its lower cost, simpler operation and wider applicability.
In recent years, numerous studies have corroborated the relationship between the TyG index and the risk of IR-related metabolic diseases such as diabetes [8], nonalcoholic fatty liver disease (NAFLD) [9], cardiovascular disease [10] and metabolic syndrome [11]. What is more, obesity is commonly acknowledged to trigger or worsen the presence of insulin resistance [12]. Several studies have evaluated that TyG-related parameters are more effective than the isolated TyG index [13], such as TyG combined with body mass index (TyG-BMI), TyG combined with waist circumference (TyG-WC) and TyG combined with waist-height ratio (TyG-WHtR). However, the effect of applying the TyG index and TyG-related parameters to predict the risk of diabetic retinopathy in diabetes patients is still unclear. Therefore, this study aims to determine the predictive value of the TyG index and TyG-related parameters for the DR among the US population with diabetes.
Data Source
The National Health and Nutrition Examination Survey (NHANES) is a nationwide cross-sectional study aimed at assessing the health and nutrition status of the general population of the United States, using a complex sampling strategy. The study procedures were approved by the National Center for Health Statistics Ethics Review Board (Protocols #2005-06, #2011-17 and #2018-01). Informed consent was obtained from participants before any data were collected. The Centers for Disease Control and Prevention (CDC) provide health statistics and details of the NHANES protocol [14]. All participants took part in standardized home interviews covering demographic and health-related topics, while comprehensive physical and laboratory examinations were carried out at the mobile examination center (MEC). In our study, we used seven cycles of the open NHANES database (2005-2018). Figure 1 depicts the selection process. Subjects with missing data, those younger than 20 years of age, pregnant women and those without diabetes were excluded. For more information on the data, please visit www.cdc.gov/nchs/nhanes/ (accessed on 29 October 2022).
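To make the exclusion criteria concrete, a minimal pandas sketch of the cohort selection is shown below. It is illustrative only: the merged data frame, the self-reported diabetes flag and the laboratory column names are assumptions, and only RIDAGEYR (age in years) and RIDEXPRG (pregnancy status at exam) are standard NHANES variable names.

```python
import pandas as pd

def select_cohort(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the stated exclusions: age < 20, pregnancy, no diabetes, missing data."""
    df = df[df["RIDAGEYR"] >= 20]              # adults only
    df = df[df["RIDEXPRG"] != 1]               # exclude participants pregnant at exam
    df = df[df["diabetes_self_report"] == 1]   # hypothetical self-reported diabetes flag
    # Drop records with missing values in the variables needed for the TyG index.
    return df.dropna(subset=["triglyceride_mgdl", "glucose_mgdl", "bmi"])
```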
Participants who answered "yes" to the home-interview question "have you ever been told by a doctor or health professional that you have diabetes or sugar diabetes?" were defined as diabetes patients. Digital retinal images, obtained using Topcon non-mydriatic fundus photography (TRC-NW6S, Topcon, Tokyo, Japan) in the 2005-2006 and 2007-2008 cycles, were sent to contract graders at the University of Wisconsin-Madison for reading. DR participants included those diagnosed from retinal images and those who self-reported DR.
Study Variables
Information on age, gender, race/ethnicity, education, history of comorbidities (coronary heart disease (CHD), stroke, hypertension and nephropathy) and ratio of family income to poverty (PIR) was collected through demographic questionnaires in family interviews. The height, waist circumference (WC) and weight of all participants were collected by trained health technicians at the mobile examination center (MEC). Body mass index (BMI) was calculated by the following formula: BMI = body weight (kg)/height (m²) [15]. Participants had fasting venous blood drawn after at least an 8-h overnight fast, and measurements including high-density lipoprotein (HDL, mg/dL), low-density lipoprotein (LDL, mg/dL), total cholesterol (TC, mg/dL), triglyceride (mg/dL), fasting glucose (mg/dL), fasting insulin (µU/mL) and glycohemoglobin (HbA1c, %) were obtained.
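The TyG index itself is not defined explicitly in this section; the sketch below uses the standard published formula ln(fasting triglyceride [mg/dL] × fasting glucose [mg/dL] / 2) and the usual definitions of the combined parameters (TyG multiplied by BMI, WC or the waist-height ratio). These formulas should be treated as assumptions about the authors' exact calculations.

```python
import numpy as np

def tyg(triglyceride_mgdl: float, glucose_mgdl: float) -> float:
    """Standard TyG index: ln(fasting triglyceride [mg/dL] x fasting glucose [mg/dL] / 2)."""
    return np.log(triglyceride_mgdl * glucose_mgdl / 2.0)

def bmi(weight_kg: float, height_m: float) -> float:
    """BMI as defined in the text: body weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def tyg_related(triglyceride_mgdl, glucose_mgdl, weight_kg, height_m, waist_cm):
    """TyG combined with BMI, waist circumference and waist-height ratio."""
    t = tyg(triglyceride_mgdl, glucose_mgdl)
    return {
        "TyG": t,
        "TyG-BMI": t * bmi(weight_kg, height_m),
        "TyG-WC": t * waist_cm,
        "TyG-WHtR": t * (waist_cm / (height_m * 100.0)),
    }
```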
Statistical Analysis
A χ2 test and independent sample t-test were used to compare the differences in the characteristics between categorical variables and continuous variables at baseline in the non-DR group and DR group, respectively. Continuous variables are represented as mean ±standard deviation, and the categorical variables are shown as frequencies. A logistic regression analysis was carried out to evaluate the relationship between the risk of DR and TyG-related parameters, and to calculate the odds ratio (OR) and 95% confidence interval (CI), which showed the outcomes of several models modifying confounding factors. Among them, the crude model did not include any adjustment for covariates, model 1 adjusted the general demographic variables, and model 2 added HDL, LDL, TC, hypertension history and retinopathy history on the basis of model 1. In addition, the tendency test was conducted with the first quartile as a reference. Restricted cubic splines (RCSs) were used to identify nonlinear relationships. The diagnostic efficacy of the TyG index and its related parameters for DR were analyzed and drawn using the receiver operating characteristic (ROC) curve, evaluating the screening value of each method by the area under the ROC curve (AUC). A hierarchical logistic regression model carried out an exploratory hierarchical analysis on some subgroups and determined whether interactions occurred. p < 0.05 (bilateral) was considered to have statistical significance. All analyses were conducted through R language 4.2.2 and SPSS 22.0.
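A minimal sketch of the quartile-based logistic models, trend test and restricted-cubic-spline term described above is given below. The column names, the covariate list and the use of statsmodels/patsy are illustrative assumptions rather than the authors' actual code (the analysis was run in R and SPSS).

```python
import pandas as pd
import statsmodels.formula.api as smf

def tyg_models(df: pd.DataFrame):
    """Quartile logistic regression (Q1 reference), ordinal trend test and an RCS term."""
    df = df.copy()
    df["tyg_q"] = pd.qcut(df["tyg"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
    # Hypothetical model-2-style covariate set (demographics plus lipids and history).
    covariates = "age + C(gender) + hdl + ldl + tc + C(hypertension) + C(retinopathy_history)"
    # Adjusted odds ratios per quartile: OR = exp(coefficient).
    quartile = smf.logit(f"dr ~ C(tyg_q, Treatment('Q1')) + {covariates}", data=df).fit()
    # Trend test: quartile entered as an ordinal 1-4 term.
    df["tyg_q_ord"] = df["tyg_q"].cat.codes + 1
    trend = smf.logit(f"dr ~ tyg_q_ord + {covariates}", data=df).fit()
    # Non-linearity explored with a natural cubic regression spline (patsy's cr()).
    spline = smf.logit(f"dr ~ cr(tyg, df=3) + {covariates}", data=df).fit()
    return quartile, trend, spline
```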
Baseline Characteristics of the Participants
The study sample includes 888 adults, 443 females and 445 males. The average age is 62.2 ± 12.1 years. Two hundred and sixty-three (29.6%) patients have DR. Among the diabetes patients, only 13.7% have a normal BMI, and 85.6% are overweight or obese. Table 1 shows the comparison between non-DR and DR adults. The results show that fasting glucose, HbA1c and the TyG index are higher in DR participants, and that participants with a history of retinopathy, CHD or stroke are more likely to have DR. Age, race, education, PIR, LDL, HDL, triglyceride, total cholesterol, insulin, waist circumference, body mass index and hypertension history show no significant differences between diabetic patients with/without DR.
Logistic Regression Analyses for the Relationship between Various TyG-Related Parameters and DR in Different Models
The logistic regression models depict the relationship between the various TyG-related parameters and DR. Based on our exploration, there is a dose-response relationship between the quartiles of the TyG index, taking the first quartile as the reference, and the risk of DR (p for trend = 0.004). This trend remains significant even after further adjustment for confounding factors (HDL, LDL, TC, hypertension and retinopathy) in model 2 (p for trend = 0.002). After adjusting for potential confounding variables (model 2), the ORs of DR are 1.182 (95% CI 0.756-1.848), 1.327 (95% CI 0.831-2.121) and 2.186 (95% CI 1.323-3.613) for the second, third and fourth TyG index quartiles, respectively. TyG-BMI is a non-negligible risk factor for DR (OR 1.014, 95% CI 1.001-1.027, p = 0.035), and the TyG index (p = 0.002) and Q4-TyG index (p = 0.002) remain critical risk factors for DR.
Restricted Cubic Splines for the Relationship between the TyG Index and DR
An approximately U-shaped association between the TyG index and DR, demonstrated and modeled by restricted cubic splines with four knots, is displayed among diabetes participants, suggesting that the TyG index is non-linearly associated with DR risk (Figure 2). In the crude model (Figure 2a), when the TyG index is greater than 9.21, the risk of DR increases (p for non-linearity = 0.001). Following further adjustment for confounding factors (Figure 2b), the diagram demonstrates a reduction of DR risk when the TyG index is beneath 9.18, after which the risk increases (p for non-linearity = 0.001).
Subgroup Analysis of the Correlation between the TyG Index and DR
To verify the impact of the TyG index on DR, the study examined the interaction terms of effective variables that may lead to changes in DR risk. A subgroup analysis was conducted according to demographic factors, laboratory examination, history of hypertension (yes or no) and history of kidney disease (yes or no). Table 3 shows the results of the subgroup analysis of the correlation between the TyG index and the risk of DR. There is no difference in the TyG index among most pre-specified subgroups in DR participants, except for gender (p for interaction = 0.013), total cholesterol (p for interaction = 0.013) and retinopathy history (p for interaction = 0.032). The TyG index has a significant interaction relationship with DR in the female group (OR 2.669, 95% CI 1.395-5.109), high total cholesterol group (OR 1.004, 95% CI 1.001-1.006) and retinopathy history group (OR 2.096, 95% CI 1.328-3.307) after adjusting for the confounding variables. In addition, according to the history of coronary heart disease (CHD) and stroke, diabetic patients are classified into vasculopathy (n = 171) and non-vasculopathy (n = 717) groups. Table 4 shows the results of the vasculopathy subgroup analysis of the correlation between the TyG-related parameters and the risk of DR after adjusting for the confounding factors in model 2. The TyG index is a risk factor for a DR event in participants without vasculopathy (OR 2.656, 95% CI 1.643-4.294, p < 0.01).
Diagnostic Efficacy of Various Parameters for DR
A receiver operating characteristic (ROC) curve was used to analyze the diagnostic efficacy of the TyG index, TyG-WC, TyG-BMI, TyG-WHtR and HOMA-IR for DR (Figure 3). The optimum cut-off value of the TyG index for diagnosing DR is 9.86 (AUC 0.543, sensitivity = 23.2%, specificity = 86.56%). In addition, the study also calculates the best cut-off value of TyG-WC as 961.0 (AUC = 0.517, sensitivity = 69.6%, specificity = 37.3%). Moreover, the sensitivity, specificity, AUC and best cut-off value of TyG-BMI to diagnose DR are 81.8%, 21.8%, 0.494 and 247.3, respectively. The sensitivity, specificity, AUC and best cut-off value of TyG-WHtR are 93.16%, 11.04%, 0.504 and 5.05, while those of HOMA-IR are 5.7%, 94.9%, 0.454 and 41.04, respectively. An AUC greater than 0.5 is considered to have diagnostic value; these results show that the diagnostic value of the TyG index for DR patients is higher than that of TyG-WC, TyG-BMI, TyG-WHtR and HOMA-IR.
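As a hedged illustration of how such cut-offs can be derived, the sketch below selects the threshold maximizing Youden's J from an ROC curve. The array names and the use of scikit-learn are assumptions, and the paper does not state which criterion was actually used to choose its cut-offs.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def optimal_cutoff(y_true: np.ndarray, scores: np.ndarray) -> dict:
    """Return the AUC and the cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    auc = roc_auc_score(y_true, scores)
    best = np.argmax(tpr - fpr)          # index of the Youden-optimal threshold
    return {"cutoff": thresholds[best], "AUC": auc,
            "sensitivity": tpr[best], "specificity": 1 - fpr[best]}
```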
Discussion
This research further explores the relationship and application of the TyG index and its related parameters in DR patients, using the NHANES database, which is based on a nationally representative population of the United States. This study finds that TyG-BMI, the TyG index and the Q4-TyG index are significant risk factors for DR. Additionally, the TyG index exhibits a significant dose-response relationship with DR risk. Notably, this study is the first to demonstrate a U-shaped relationship between the TyG index and DR risk after adjusting for confounding factors, and the risk of DR bottoms out when the TyG index is approximately 9.18. These findings suggest that the TyG index is a robust indicator of DR risk and can facilitate the identification and monitoring of diabetic patients who are at risk for DR.
Previous research has explored the relationship between the TyG index and DR. A nested case-control study carried out by Yao et al. in Chinese T2DM inpatients showed that the TyG index is highly correlated with severe DR [18]. However, another study found that a rise in the TyG index was closely related to microalbuminuria and the risk of cerebrovascular disease, but not to DR [19]. Diverse research designs, sample sizes and statistical methods may account for the discrepancies in findings. Notably, the aforementioned studies primarily focused on inpatients, thus necessitating further validation in community populations.
The TyG index has been reported to be a composite biochemical indicator that reflects the combined effect of glucose and lipids [20]. The relationship between DR and abnormal glycolipid metabolism indicators has been extensively discussed [21,22]. DR is a progressive eye disease that poses a threat to vision. Hyperglycemia damages the retinal microvascular system, leading to diabetic macular edema (DME), neovascularization, tractional retinal detachment, vitreous hemorrhage and ultimately blindness [23]. Although hyperglycemia is central to the development of diabetic retinopathy, abnormal lipid metabolism exacerbates the condition [24]. Montgomery et al. observed that a severe decrease in the b/a wave ratio and retinal function can be attributed to dyslipidemia and the associated lipid oxidation and increased oxidative stress [25]. Therefore, lipid abnormalities and oxidative stress likely aggravate the damage caused by diabetes to the retina. However, experiments conducted by Acharya et al. on rats revealed that hyperglycemia and aging exacerbate inflammation and oxidative stress induced by dyslipidemia [26]. This research indicates that the pathogenic effects of abnormal glucose and lipid metabolism are intertwined.
The TyG index is used as a substitute measurement for evaluating insulin resistance [27]. Insulin resistance is mainly manifested by decreased insulin sensitivity, which is prevalent in a variety of metabolic-related diseases. Insulin resistance persists throughout the course of diabetes, and many studies have demonstrated that chronic low-level inflammation due to obesity promotes the occurrence and development of diabetic complications by aggravating insulin resistance [28]. Obesity is one of the established risk factors for diabetes mellitus [29], and it is characterized by abnormal or excessive fat accumulation. In this study, 85.6% of diabetic patients are overweight or obese (BMI ≥25 kg/m²) [30]. Previous studies show that hypertrophy or an increased number of adipose cells results in the enhanced or weakened expression of its secreted hormones and adipokines, which affects the actions of insulin at different levels and further induces or exacerbates insulin resistance [31,32]. These findings may explain the mechanism linking the TyG index to diabetic retinopathy, but the specific mechanism still needs further study.
Multiple studies have demonstrated that combining the TyG index with obesity-related indicators, such as waist circumference, BMI and WHtR, enhances the ability to predict insulin resistance. A large-scale cross-sectional study concluded that TyG-BMI shows the best discriminative power for assessing insulin resistance in clinical settings [33]. Taiwo et al. concluded that TyG-WHtR is a superior predictor of metabolic syndrome risk in Nigerians compared to the TyG index and other TyG-related parameters [34]. In contrast, another study indicated that the TyG index is the better predictor of coronary heart disease risk and coronary atherosclerosis severity in NAFLD patients compared to TyG-BMI [17]. This study found that TyG-BMI and the TyG index are important risk factors for DR after correcting for related confounders.
Our study showcases the initial evidence of a U-shaped nonlinear relationship between the TyG index and DR. Furthermore, even after controlling for confounding factors, a significant correlation between the TyG index and DR persists. This discovery will be of great help to clinicians, as it suggests that the TyG index could potentially serve as a straightforward, reliable and practical measure in the treatment and management of DR. Our subgroup analysis also indicates that IR-related diabetic retinopathy may affect female diabetic patients more acutely. However, this does not necessarily imply a higher prevalence of female patients. Therefore, clinicians should pay more attention to insulin resistance in female diabetic patients during clinical practice, while also taking into account the blood lipids and kidney status. Moreover, the vasculopathy subgroup analysis highlights the TyG index as a critical risk factor in diabetic patients without vasculopathy. As the TyG index could detect retinal damage earlier than cardiovascular and cerebrovascular damage in diabetic patients, monitoring this index during the disease could help reduce the risk of incident DR and resultant healthcare burdens. Nonetheless, the complexity of the disease and the presence of numerous combined risk factors in diabetic patients with vasculopathy may weaken the correlation of the TyG index. Future studies should thus aim to determine the safety threshold of the TyG index to guide medication and treatment in diabetic individuals, thereby delaying the onset and progression of diabetic retinopathy.
This study also has some limitations. (1) This is a cross-sectional study; it can therefore only demonstrate a correlation between DR and the TyG index, and further prospective research is required to clarify the causal relationship. (2) The lack of data on DR severity, and other residual confounding factors that are difficult to measure or evaluate, probably affect our conclusions [35]. However, these limitations might be balanced by our strengths, including the large sample size, the diverse ethnicities of the United States, the wide age range, and precise data and information on covariates.
"year": 2023,
"sha1": "2a80393f1d6fcd38b2e88264e4e970f13c3512dc",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4426/13/3/495/pdf?version=1678427981",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "731c0b5cea4710571bb956bce61e2ff4edaedd9a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Radiation exposure elicits a neutrophil-driven response in healthy lung tissue that enhances metastatic colonization
Radiotherapy is one of the most effective approaches to achieve tumour control in cancer patients, although healthy tissue injury due to off-target radiation exposure can occur. In this study, we used a model of acute radiation injury to the lung in the context of cancer metastasis, to understand the biological link between tissue damage and cancer progression. We exposed healthy mouse lung tissue to radiation prior to the induction of metastasis and observed a strong enhancement of cancer cell growth. We found that locally activated neutrophils were key drivers of the tumour-supportive preconditioning of the lung microenvironment, governed by enhanced regenerative Notch signalling. Importantly, these tissue perturbations endowed arriving cancer cells with an augmented stemness phenotype. By preventing neutrophil-dependent Notch activation, via blocking degranulation, we were able to significantly offset the radiation-enhanced metastases. This work highlights a pro-tumorigenic activity of neutrophils, which is likely linked to their tissue regenerative functions.
Introduction
Radiotherapy (RT) is received by approximately 60% of all cancer patients, and remains one of the most successful non-surgical techniques to achieve local tumour control 1 . In recent decades, technological advances and increased sophistication in imaging have significantly improved the accuracy, efficacy and tolerability of RT. However, radiation-induced damage in non-target tissues can still occur, inducing a local injury 1 . A previous study from our laboratory identified the presence of a tissue regeneration program within the metastatic environment of pulmonary metastases from breast cancer 2 , generating the intriguing hypothesis that an injury could predispose the tissue to metastatic growth. In this study, we aimed to test whether acute lung injury triggered by radiation would set the stage for a perturbed tissue interaction with arriving cancer cells and foster metastasis. Both clinical and experimental studies have reported poor prognosis and increased metastasis associated with tumours occurring within a pre-irradiated site 3,4 . This has been attributed to both the release of pro-migratory factors from the irradiated local tissue 5 , and increased myeloid cell mobilisation induced by the primary tumour 4 . These studies, however, have not addressed whether an injury to healthy tissues induced by off-target radiation impacts subsequent metastatic growth within that organ. A 1973 study reported that radiation exposure influenced lung integrity and the seeding of metastatic cells 6 , while an earlier clinical study indeed suggested that lung metastases are increased after post-operative RT for breast cancer 7 . In that study, analysis of women with clinically and pathologically comparable breast tumours revealed a significantly higher incidence of ipsilateral pulmonary metastasis (occurring on the same side as the breast cancer) in women who received post-operative RT compared to surgery alone. Of course, the outdated RT technologies used in that study would have generated a large off-target radiation volume compared to the much more precise modern image-guided platforms 7 . Nonetheless, the biology behind the metastatic risk increase is not fully understood, and its uncovering may reveal important mechanistic connections between the processes of tissue injury and cancer.
In the present study, we set out to elucidate the injury response of healthy lung tissue to radiation exposure, and we report that radiation can generate a profoundly pro-metastatic microenvironment. Importantly, we found that lung neutrophils are key for this tumour promotion effect. Although neutrophils are vital for the host defence against pathogens, their infiltration and overzealous activation during acute inflammatory responses can exacerbate tissue damage 8 . However, an important role in the repair and regeneration of damaged epithelium in certain contexts is also emerging [9][10][11] . In cancer, neutrophils are well known to contribute to many aspects of tumour progression and metastasis and a plethora of pro-tumorigenic activities have been defined 12 . Here we describe an unexpected activity of neutrophils in the irradiated tissue, which, by influencing the responses of lung resident cells, bridges their tissue-repair functions with their tumour-supportive activity.
Mechanistically, we identified Notch as the mediator of the neutrophil-dependent lung epithelial response to radiation injury, which is central to this tumour-supportive function. This surprising phenomenon of neutrophil tissue-priming following radiation exposure may have implications for the tissue response to radiation in patients, and warrants clinical studies to investigate neutrophil-driven perturbations in cancer patients undergoing RT.
Radiation exposure in healthy lung tissue boosts metastasis
To examine the effects of off-target radiation exposure on healthy lung tissue, we delivered a single 13 Gy dose of focussed radiation specifically to the thoracic cavity of anaesthetised female BALB/c mice. We focused on the short-term tissue response to this acute lung injury, which in the long-term typically results in pneumonitis or fibrosis 1 . Mice were allowed to recover for four days, before being orthotopically injected with cancer cells into the mammary fat pad to induce primary tumours and test the onset of spontaneous metastasis (Fig. 1a). We first induced non-metastatic 4T07 primary tumours, where cancer cells can disseminate but are unable to initiate metastatic growth. Strikingly, we found an abundance of metastases within the lungs of mice previously irradiated, while only rare metastatic foci were detected in control lungs (sham-irradiated) (Fig. 1b,c). Similarly, when lungs were harvested from irradiated mice harbouring metastatic 4T1 tumours, we observed, at an early stage of tumour progression, a significant increase in metastatic burden compared to control mice while the primary tumour size was not affected (Extended Data Fig. 1a-c, Extended Data Fig. 2). We next confirmed our findings in an experimental metastasis model, in which metastases are induced via a single tail intravenous (IV) injection to drive cancer cell seeding in the lungs of mice which have received targeted lung irradiation 7 days earlier (Fig. 1d). Consistently, a profound increase in metastasis was observed following irradiation in FVB mice who received an IV injection of primary cancer cells isolated from tumours developed in mammary tumour virus-promoter middle tumour-antigen (MMTV-PyMT) mice crossed with actin-GFP transgenic mice (Extended Data Fig. 1d,e). This prometastatic effect of radiation was not unique for breast cancer cells, as a strong induction in metastasis was also observed in immune-compromised mice that were inoculated with the human oesophageal adenocarcinoma cell line Flo-1 following lung irradiation (Fig. 1e,f). Moreover, when injecting human non-small-cell lung cancer cell lines A549 and H460, we could only detect metastatic foci in pre-irradiated mice and not control animals (Extended Data Fig. 1f). Thus, radiation pre-exposure of healthy lung tissue strongly supports the growth of cancer cells across multiple mouse genetic backgrounds and induces an aggressive metastatic phenotype in poorly metastatic cells. Since cancer patients are typically treated with fractionated RT, whereby low doses of radiation are delivered over consecutive days, we employed fractionated dosing in our experimental setting (Fig. 1g). As with single-dose RT, a profound enhancement of 4T1 cancer cell metastasis was observed in BALB/c mice that received doses of 3 x 4 Gy compared to control mice (Fig. 1h,i), with the effect more pronounced with an additional 4 Gy fraction. We calculated the Biological Effective Dose (BED) in our different experimental settings, which measures the true biological dose delivered to a given tissue by a particular combination of dose-per-fraction and total dose, characterised by a specific α/β ratio 13 . We used an α/β of 10 in BED calculations for this early radiation-induced reaction 14 . Notably, the lowest fractionated regime (3x 4 Gy) delivered almost half the BED of a single high-dose 13 Gy treatment (BED 16.8 versus 29.9 Gy), yet still led to a strong boost in metastatic proficiency of the lung. 
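The BED values quoted above follow from the standard linear-quadratic formula BED = n·d·(1 + d/(α/β)). The short sketch below reproduces them for α/β = 10; the 4 × 4 Gy value is computed the same way, although the text only quotes the 3 × 4 Gy and single 13 Gy figures.

```python
def bed(dose_per_fraction_gy: float, n_fractions: int, alpha_beta: float = 10.0) -> float:
    """Biologically Effective Dose: BED = n * d * (1 + d / (alpha/beta))."""
    total_dose = n_fractions * dose_per_fraction_gy
    return total_dose * (1 + dose_per_fraction_gy / alpha_beta)

print(bed(4.0, 3))    # 16.8 Gy for 3 x 4 Gy
print(bed(13.0, 1))   # 29.9 Gy for a single 13 Gy dose
print(bed(4.0, 4))    # 22.4 Gy for 4 x 4 Gy (derived, not quoted in the text)
```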
Next, we used image-guided, focussed radiation to specifically target the right lung lobe of BALB/c mice, whilst leaving the left lung, heart and trachea largely unexposed (Fig. 1j). Given that the right is the larger side of the murine lung, more metastases are found on this side in both control and irradiated animals. Therefore, we measured if irradiation of the right side further increased the right/left ratio of metastases (Fig. 1j). Importantly, we observed a significant enrichment of 4T1 metastatic breast tumours in the lobe pre-exposed to either a medium (8 Gy) or high dose of radiation (12 Gy), when compared to the non-irradiated control mice (Fig. 1k). Of note, no changes in metastasis were observed in the non-irradiated left side of the lung between the experimental groups (Extended Data Fig. 1g), excluding systemic priming from irradiated to non-irradiated tissue in this setting. Thus, smaller irradiation volumes exert a similar effect to whole thoracic radiation, and the influence of radiation-induced cardiac damage on metastatic growth in our system can be excluded.
Radiation induces neutrophil infiltration and activation
To probe how the tissue alterations induced by lung irradiation fuelled subsequent metastatic growth, we analysed lung tissue from BALB/c mice at day 7 following a single dose of 13 Gy irradiation, in the absence of cancer cell seeding. At this early time point, no overt alterations in the lungs were observed histologically (Fig. 2a), although, as expected, we detected radiation-induced DNA damage and senescence (Extended Data Fig 3a-d).
When analysing immune infiltration in the lung at this time point, we observed a marked neutrophil infiltration compared to other immune subsets in the irradiated lungs (Fig. 2b,c). No significant alterations were detected within the adaptive immune compartment upon irradiation (Fig. 2c). This neutrophil increase was also reflected in the abundance of intra-tumoural neutrophil infiltration in the 4T1 metastatic lesions subsequently growing in pre-irradiated lungs compared to controls (Extended Data Fig. 3e). The pro-metastatic effect of radiation was maintained for murine cancer cells in immunodeficient mice, as we observed for human cell lines (Fig. 1e,f and Extended Data Fig. 1f and 3f).
Interestingly, we found that neutrophils accumulating in irradiated lungs at this early time point homogenously acquired a distinctive hyper-segmented nuclear morphology, a hallmark feature of neutrophil activation (Fig. 2d,e). To further characterise this activated status, we performed mass-spectrometry based proteomic profiling of isolated lung and bone marrow neutrophils from irradiated and control mice (Fig. 2f). Striking differences were observed in the proteome of neutrophils harvested from irradiated lungs, while the proteome of bone marrow neutrophils isolated from distal femurs of the two groups were highly comparable (Extended Data Fig. 3g,h), although a level of radiation in the bone marrow of ribs cannot be excluded. Consistent with their hyper-segmented nuclear morphology (Fig. 2d,e), Metacore analysis of over-represented proteins revealed that neutrophils from irradiated lungs acquire a pro-inflammatory, activated phenotype with an increase in granule protein content (Fig. 2g,h and Extended Data Fig. 3i). Thus, radiation exposure in the lungs triggers a profound local activation of neutrophils. Recent evidence indicates that neutrophils adapt accordingly to tissue-derived signals and have a variable life time in mouse tissues, approximately 10.1h in the lung 15 . To test a direct effect of radiation exposure on neutrophil activation and their persistence in the tissue, we gave mice a pulse of EdU 1h prior to irradiation, and analysed neutrophils from lungs and bone marrow (Extended Data Fig. 3j). Flow cytometry analysis revealed negligible EdU positivity 7 days post-irradiation in neutrophils from both compartments, implying complete neutrophil turnover during this time (Extended Data Fig. 3k). We concluded that activated neutrophils observed 7 days post-irradiation were not directly exposed to radiation, and that neutrophil recruitment and activation is likely triggered by the local lung environment.
Radiation-primed neutrophils support metastatic colonization
Pro-inflammatory neutrophils with hyper-segmented nuclei have previously been linked with anti-cancer activity, including in the context of radiotherapy 16,17 . To test whether radiationprimed neutrophils alter the metastatic proficiency of irradiated lung tissue, we used daily injections of an anti-Ly6G antibody to deplete neutrophils from mice exposed to lung irradiation and 4T1-GFP + cancer cells were intravenously inoculated 7 days post-irradiation (Fig. 3a). Strikingly, radiation-primed neutrophils strongly supported metastatic growth as their depletion dramatically reduced the tumour burden in the lung ( Fig. 3b-d).
Many factors are at play when lungs are injured by radiation, for instance the integrity of the vasculature is compromised, which enhances extravasation. Indeed, cancer cells were increased 72h after seeding in previously irradiated lungs, a time point when extravasation is complete, however, this was unaffected by the absence of neutrophils (Extended Data Fig. 4a-c). NETosis has been increasingly reported as a mechanism mediating neutrophil prometastatic functions via inducing extracellular matrix remodelling and directly increasing cancer cell growth [18][19][20][21] . However, we did not detect NETosis in lung neutrophils at various timepoints following radiation exposure (Extended Data Fig. 4d).
We reasoned there might be various non-mutually exclusive mechanisms by which neutrophil recruitment and activation could favour metastatic growth in irradiated lungs 12 . Radiation-primed neutrophils could boost the growth of cancer cells directly or by orchestrating a tumour-supportive tissue response. To test if neutrophil activation directly influences their interaction with cancer cells, we performed in vitro co-cultures using a porous three-dimensional (3D) Alvetex™ scaffold (Extended Data Fig. 4e). We reported previously that lung neutrophils support cancer cell growth in this co-culture system 2 , but no further boost was observed when neutrophils were harvested from irradiated lungs (Extended Data Fig. 4f). We also did not detect an increase in cancer cells EdU incorporation following 2D co-culture with radiation-primed lung neutrophils compared to control neutrophils (Extended Data Fig. 4g,h). These results suggest radiation-primed lung neutrophils do not offer an additional direct growth-promoting advantage to cancer cells compared to control neutrophils.
This prompted us to test the alternative hypothesis that radiation-primed neutrophils boosted metastasis by perturbing the lung microenvironment prior to the arrival of cancer cells. Therefore we performed targeted lung irradiations in neutropenic G-csf ko mice that are unable to mobilise neutrophils from the bone marrow 22 . Irradiated G-csf ko mice were treated with recombinant G-CSF (rG-CSF) to restore neutrophil recruitment to the lungs (Fig. 3e), after which infiltrating neutrophils displayed a marked increase in nuclear segmentation indicating an activated phenotype similar to wild-type mice (Fig. 3f,g). rG-CSF treatment was stopped 48h before IV injection with MMTV-PyMT primary mammary cancer cells, by which time mice had returned to their original neutropenic state (Extended Data Fig. 5a,b). Importantly, long-term analysis revealed a significant increase in metastatic incidence in irradiated mice pre-treated with rG-CSF (Fig. 3h,i). Since the number of neutrophils was only increased prior to cancer cell seeding, this suggests that neutrophils may indirectly influence cancer cell growth in the irradiated lung by perturbing the tissue microenvironment.
Neutrophils influence the epithelial response to radiation
We next tested whether radiation-primed neutrophils alone were sufficient to drive a change in the healthy lung tissue microenvironment and influence metastatic receptiveness. We performed adoptive transfer of control or radiation-primed lung neutrophils into lungs of naïve recipient mice via a single IV injection (Fig. 4a). After four days of lung 'conditioning', at which time turnover of transferred neutrophils had occurred 15 , 4T1-GFP + cancer cells were seeded to induce metastasis (Fig. 4a). Strikingly, mice whose lungs were conditioned with radiation-primed neutrophils had a marked increase in cancer cell growth compared with mice conditioned with control lung neutrophils ( Fig. 4b-d). Thus, radiationprimed neutrophils can incite a tissue change in naïve lung tissue.
We then sought to uncover this neutrophil tissue-perturbation activity in the context of radiation exposure. We used flow cytometry to sort either epithelial (CD45−CD31−Ter119−EpCAM+) or mesenchymal (CD45−CD31−Ter119−EpCAM−) cells from control and irradiated lungs in the presence or absence of neutrophils (Fig. 4e). As expected, RNA-sequencing showed significant alterations in the transcriptome of both cell types 7 days after irradiation, while a profound influence of neutrophils also emerged (Fig. 4f and Extended Data Fig. 6a). Particularly in the lung epithelium, the absence of neutrophils resulted in a completely different irradiated cell cluster in the principal component analysis (PCA) (Fig. 4f). To test if the neutrophil-mediated transcriptional response was reflected in a functional difference in the epithelial compartment, we performed a lung organoid assay 23 . This is the gold-standard assay whereby lung progenitors are challenged to survive and grow organoid structures ex vivo. Lung epithelial cells were isolated from irradiated mice either in the presence or absence of neutrophils and co-cultured with lung fibroblasts in Matrigel (Fig. 4e,g). As expected, epithelial cells injured by radiation formed sharply fewer organoids than non-injured cells 24 , yet strikingly, epithelial cells irradiated in the absence of neutrophils were almost completely devoid of organoid-forming ability (Fig. 4h,i). Thus, these data suggest that neutrophils are required to support lung epithelial fitness and progenitor function upon irradiation. To examine whether radiation-induced alterations in lung epithelial cells also functionally affect their interaction with cancer cells, we cultured them ex vivo on 3D Alvetex™ scaffolds with MMTV-PyMT-GFP+ cancer cells (as in Extended Data Fig. 4e). After confirming that the survival of the epithelial cells on the scaffolds was not affected by irradiation, we found that the growth advantage provided by epithelial cells was almost entirely abolished when the irradiation occurred in the absence of neutrophils (Extended Data Fig. 6b-e). These functional differences in irradiated epithelial cells from neutrophil-depleted mice prompted us to further examine the transcriptional signatures of the different epithelial cell clusters we had distinguished by PCA (Fig. 4f). We identified a distinctive cluster of genes whose expression was specifically enriched upon irradiation and boosted by the presence of neutrophils (gene set B, Fig. 5a, left). In the mesenchymal compartment, a neutrophil-driven response was also evident; however, this was less clear, and the presence of neutrophils appeared to have a greater impact on genes downregulated upon irradiation (gene set A, Extended Data Fig. 6f). Metacore pathway analysis of the neutrophil-promoted genes in epithelial cells (gene set B, Fig. 5a, right) revealed an enrichment of Notch signalling, a critical regulator of stem cell proliferation and differentiation during lung development and tissue repair 25 , particularly in the receptors Notch 1, 3 and 4 and the ligands Dll1 and Dll4 (Fig. 5b). The Notch signature was validated by reverse transcription with quantitative PCR (RT-qPCR), whereby a neutrophil-dependent induction of the canonical target genes Hes1 and Hey1 was also evident, as well as by immunofluorescence on isolated irradiated lung epithelial cells, where a reduction of intracellular activated Notch was detected in the absence of neutrophils (Extended Data Fig. 7a,b).
Interestingly, in line with previous evidence of neutrophil-dependent vascular repair 15 , neutrophils also appeared to influence Notch genes in endothelial cells isolated from irradiated mice as shown by RT-qPCR (Extended Data Fig. 7c), although these cells were excluded during cell sorting for the RNA-seq analysis (Fig. 4e). Taken together, our findings suggest that the epithelial response to radiation injury, and the subsequent interaction of these cells with cancer cells, are markedly influenced by the presence of neutrophils.
Radiation enhances Notch-signalling and cancer cell stemness
Given the enrichment in Notch signalling within the lung epithelium of irradiated mice ( Fig. 5a-c), we explored whether genetic Notch activation in control mice, mimicking the activation observed downstream of radiation-primed neutrophils, could recapitulate the pro-metastatic effect of radiation. To test this, we forced the activation of Notch specifically in the lung epithelium of non-irradiated mice by crossing Rosa26-NICD-IRES-GFP mice 26 with Sftpc-CreER mice that express Cre recombinase in lung alveolar type II cells. The resulting progeny were then crossed with the mammary tumour model MMTV-PyMT.
Following the onset of mammary tumours, we administered tamoxifen to drive constitutive expression of the Notch intracellular domain in the lung alveoli in the absence of radiation exposure (Fig. 5d, Extended Data Fig. 8a). As expected, forced activation of Notch in the lung had no impact on primary tumour growth (Extended Data Fig. 8b). In contrast, when we analysed the lungs two weeks post-tamoxifen, we observed a striking enhancement of spontaneous metastasis in mice with activated Notch (PyMT/Notch) compared to control mice (Fig. 5e,f), indicating that cancer cells directly profit from enhanced Notch signals within the lung epithelium. Notch activation, marked by expression of the target gene Hes1, was clearly observed in the epithelium adjacent to metastatic foci in PyMT/Notch lungs, but not in control mice (Fig. 5g). Importantly, Hes1 + alveolar epithelial cells co-expressing GFP were readily detectable within the metastatic lesions of PyMT/Notch mice and, within the metastatic environment, adjacent cancer cells also showed staining (Figure 5h). This suggests a local induction of Notch signalling in metastatic cells. Indeed, there was a marked enrichment of Hes1 expression in the tumour cells themselves, compared to tumour cells from control mice (Extended Data Fig. 8c). This suggests that persistent Notch activation in epithelial cells within the metastatic microenvironment of the lung can foster the growth of arriving tumour cells. Of note, in this context, any potential influence of lung neutrophils would be independent from their radiation-primed activity.
The identification of Notch activation within the lung alveolar compartment of irradiated mice (Fig. 5a-c and Extended Data Fig. 7a,b), together with the evidence that these cells are part of the early metastatic niche 2 , prompted us to assess whether Notch-activated alveolar cells could be detected in the metastatic niche of pre-irradiated lungs. To test this, we utilised Cherry-niche labelling 4T1 cancer cells, which are able to identify neighbouring non-cancerous cells by mCherry uptake 27 (Fig. 6a). We isolated mCherry+ non-cancer lung niche cells (CD45−GFP−Cherry+) and Cherry− distant lung cells (CD45−GFP−Cherry−) from pre-irradiated or control lungs harbouring metastases (Extended Data Fig. 2) and performed single-cell RNA sequencing (scRNA-seq). Uniform Manifold Approximation and Projection (UMAP) analysis was used for the unbiased identification of epithelial, endothelial and mesenchymal cell clusters (Fig. 6b), which we identified by the expression of EpCAM, CD31 (PECAM-1) and PDGFα/β, respectively (Extended Data Fig. 8d). Excitingly, Notch signalling was enriched within the epithelial compartment of the metastatic niche (Cherry+) compared to the distal lung epithelial cells (Cherry−) of irradiated mice (Fig. 6c). Thus, the Notch-high environment we observed at day 7 (prior to cancer cell seeding) becomes positively selected within the metastatic niche of breast cancer cells 14 days post radiation-induced lung injury. Notably, the endothelial compartment (which also displayed Notch activation upon irradiation, Extended Data Fig. 7c) also contributes to the Notch-high environment within the metastatic niche, although we did not detect an enrichment compared to the distal lung (Fig. 6c). These findings, together with the enrichment of Notch in metastatic cells surrounded by Notch-activated lung epithelium (Fig. 5d-h), prompted us to examine Notch signalling within cancer cells growing in pre-irradiated lungs in the presence or absence of neutrophils (from Fig. 3a). Importantly, the nuclear localisation of the Notch downstream DNA-binding protein RBPJ and the expression of the target gene Hes1 were strongly enriched in metastatic 4T1 cells growing in irradiated lungs compared to control tumours, and this was almost entirely abrogated by depleting neutrophils (Fig. 6d). Notch signalling is critical for the regulation of stemness in cancer cells 28 , therefore we tested if metastases high in Notch activity also show an increase in stemness potential ex vivo. We isolated metastatic 4T1 breast cancer cells growing in the lungs of pre-irradiated (Notch-high) or control mice (Notch-low), and analysed the colony-forming activity of equal numbers of tumour cells in 2D and 3D (Fig. 6f). In both experimental settings, tumour cells exposed to pre-irradiated lungs showed a striking increase in colony-forming capacity, suggesting an augmented stemness potential (Fig. 6g,h). In support of these findings, we observed a significant enrichment in the expression of the stem cell transcription factor Sox9 in tumour cells growing in irradiated lungs compared to control mice, which was reduced in metastases in neutrophil-depleted lungs (in which Notch expression is lower) (Fig. 6i,j). Interestingly, Sox9 has also been reported as a direct target gene of Notch1 29 .
Collectively, these data support the notion that Notch activation, which is part of the neutrophil-dependent radiation injury response, may be exploited by cancer cells seeding in the lung to enhance their stemness potential.
Radiation-enhanced metastasis requires degranulation
We have shown that radiation-activated neutrophils show a pro-inflammatory phenotype and an increase in granule proteins (Fig. 2g,h), therefore we tested if this tissue-perturbation effect was mediated by their degranulation. We performed targeted lung irradiation in the presence of a degranulation inhibitor Nexinhib20 30 to prevent neutrophil exocytosis (Fig. 7a). Strikingly, this led to the abrogation of radiation-mediated metastatic enhancement and the reduction of Notch activation and Sox9 expression in metastatic 4T1 cells ( Fig. 7b-d). To test the contribution of a single granule protein, Neutrophil elastase (Ela2), in the neutrophil-mediated tissue response, we irradiated the lungs of Ela2-ko mice and C57BL/6 littermates and intravenously injected E0771 breast cancer cells one week later (Fig. 7e).
We observed a marked reduction in the metastatic proficiency of Ela2-ko lungs as well as a dampening of Notch activation ( Fig. 7f-h). Thus, neutrophils strongly influence the response of lung epithelial cells to radiation injury, likely via the process of degranulation.
Finally, to confirm that Notch signalling activation in metastatic cells growing within irradiated lungs is responsible for enhancing their tumorigenicity, we treated control/ irradiated mice with the γ-secretase inhibitor DAPT from the time of metastatic seeding (Fig. 8a). DAPT treatment led to a striking reduction in the growth of tumour cells in irradiated lungs (Fig. 8b,c), which coincided with a reduction in Hes1 expression (Fig. 8d).
In line with our observation that the initial radiation injury is required to amplify Notch activity, no effect was found following Notch inhibition in control mice (Fig. 8c). Taken together, these data show that the rapid growth of tumour cells within irradiated lungs is predominantly due to enhanced Notch signalling, mediated by radiation-primed neutrophils influencing lung epithelial cell responses (Fig. 8e).
Discussion
A fundamental step in the metastatic cascade is the generation of a favourable metastatic niche 31,32 . We have recently discovered that the early response of lung tissue to breast cancer growth involves the activation of regeneration 2 . Therefore, in the present study, we probed whether the induction of a regenerative state in the tissue due to an injury would trigger a favourable environment for metastatic growth. Radiotherapy remains one of the most effective means of achieving curative outcomes for cancer patients with localized disease. However, in many patients the frequent development of metastasis significantly limits the long-term treatment success. Currently, the biological responses of healthy tissues to radiation-induced injury, the key inflammatory components involved, and the influence of this response on the metastatic proficiency of that organ are incompletely characterized.
To address these questions, we designed a preclinical study using targeted thoracic irradiation to generate an acute radiation injury to the lungs prior to the seeding of breast cancer cells. We found that radiation exposure to healthy lung induced a hospitable environment for metastatic growth. Most importantly, we show a key function of neutrophils recruited to the injured lung, which locally activate and influence the response of resident lung cells. We observed that neutrophils play a critical role particularly for the lung epithelium and directly support the progenitor function of alveolar cells when tested ex vivo.
We found a neutrophil-dependent enhancement of stem cell signalling, including Notch, in lung epithelial cells. In the adult lung, Notch is required for the regeneration of several airway cell types after chemical or infection-induced injury, including basal cells 33 , club cells 34 , alveolar type I cells 35 and a population of lineage-negative epithelial progenitor cells 36 . Moreover, Notch signalling in response to irradiation was reported to promote survival of basal human and murine airway stem cells 37 . Here we identified an enrichment of Notch signalling within the metastatic niche of pre-irradiated lungs and demonstrate that the neutrophil presence results in Notch activation in metastatic cancer cells. Importantly, this induction of Notch signalling is a dominant effector enhancing tumorigenesis. In many types of human cancer, including breast and lung, the Notch pathway is required for cancer stem cell (CSC) self-renewal [38][39][40] . The transcription factor Sox9 is an essential regulator of self-renewal in CSCs, with its overexpression tightly correlated with metastasis formation and poor survival in multiple cancers 41,42 . Indeed, we observed the Notch-high/Sox9-high phenotype in breast cancer cells growing specifically within neutrophil-proficient irradiated lungs, along with their profoundly enhanced tumorigenicity. This suggests the induction of a regenerative program promoting cancer stemness.
We also observed a neutrophil-dependent boost in Notch signalling in irradiated endothelial cells, therefore we cannot exclude their involvement as an intermediary between the neutrophils and the epithelium. Indeed, neutrophils were shown to support lung vascular repair following whole-body irradiation 15 . In addition, the neutrophil-mediated change in the response of mesenchymal cells could also be relevant for metastatic outcome. Future studies are required to resolve the role of radiation-primed neutrophils in the perturbation of other cellular components, or determine if a similar activation occurs in other tissues exposed to radiation. Notably, as acute radiation injury can contribute to the onset of pneumonitis and fibrosis in the long-term, our data encourages further investigation into the role of radiation-primed neutrophils in these chronic pathologies.
Taken together, our study shows that a high level of radiation exposure in healthy lung tissue has profound effects on the tissue microenvironment that inadvertently increases its metastatic potential. Our findings place radiation-primed neutrophils as a key modulator of these pro-metastatic alterations.
Nowadays, the technologies used to administer conformal radiotherapy are highly effective in limiting both the dose and volume of tissue within the radiation field, making the treatment much safer for patients. We acknowledge that in our experimental setting, the volume of exposed tissue is considerably larger than in the clinic. Nonetheless, given the striking effects of radiation-primed neutrophils observed in this study using pre-clinical models, together with the abundant evidence of their multiple pro-tumour functions 12 , our work encourages closer attention to neutrophil responses in cancer patients receiving RT. Moreover, since the radiation-induced neutrophil activity is, at least in part, dependent on their degranulation, this study supports the interest in developing inhibitors of cancer cell exosome release for clinical use 43,44 .
Methods
All experiments in this study were approved by the Francis Crick Institute and University College London ethical review committees, and conducted according to UK Home Office regulations under the project license PPL80/2531 and PPL70/9032. Further information on research design, statistics and technical information is available in the Nature Research Reporting Summary linked to this article.
Statistics and reproducibility
All statistical analyses were performed using Prism (version 9.1.1, GraphPad Software). Graphic display was performed in Prism, and illustrative figures were created with Biorender.com. A Kolmogorov-Smirnov normality test was performed before any other statistical test. If any of the comparative groups failed the normality test (or the group size was too low to estimate normality), a non-parametric Mann-Whitney test was performed. When groups showed a normal distribution, an unpaired two-tailed t-test was performed. When groups showed a significant difference in variance, we used a t-test with Welch's correction. When comparing 3 or more groups, we performed a one-way ANOVA with correction for multiple comparisons, or a non-parametric Kruskal-Wallis test. 3D co-cultures on scaffolds were assessed via two-way ANOVA.
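The authors ran these tests in Prism; purely as an illustration of the same decision logic, a minimal sketch in Python with SciPy might look as follows (the helper name, thresholds and minimum group size are illustrative assumptions, not the authors' code):

import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05, min_n=5):
    # Follows the decision tree above: normality -> Mann-Whitney or t-test,
    # with Welch's correction when the variances differ.
    a, b = np.asarray(a, float), np.asarray(b, float)

    def is_normal(x):
        if len(x) < min_n:                      # too few points to estimate normality
            return False
        z = (x - x.mean()) / x.std(ddof=1)      # standardise before the KS test
        return stats.kstest(z, "norm").pvalue > alpha

    if not (is_normal(a) and is_normal(b)):
        return stats.mannwhitneyu(a, b, alternative="two-sided")
    equal_var = stats.levene(a, b).pvalue > alpha
    return stats.ttest_ind(a, b, equal_var=equal_var)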
Littermate mice were randomized prior to radiation exposure into control or irradiation groups. Since in vivo treatments were performed in pre-determined groups (control vs. irradiation), no further randomization could be performed. In vitro 2D/3D co-culture assays and organoid assays were blinded at quantification. For all histological analyses, quantification between different experimental groups was performed blinded. No statistical methods were used to pre-determine sample sizes, but these are similar to those reported in our previous publications 32,45 . All replicate/experiment numbers are clearly stated in the figure legends.
No data was excluded.
Mouse strains
Breeding and all animal procedures were performed in accordance with UK Home Office regulations.
Whole-lung targeted irradiation
Mice were anaesthetised with Fentanyl (0.05 mg/kg), Midazolam (5 mg/kg) and Medetomidine (0.5 mg/kg), then given a single 13 Gy dose of radiation (300 kV, 10 mA, 1 mm Cu filtration), targeted to the thoracic cavity. For fractionation experiments, 4 Gy was delivered on 3 or 4 consecutive days. CT-based guidance was not used, but beam targeting to the lungs was achieved using a 7 cm collimator (1 cm diameter) with a total field source distance of 20 cm. The machine was calibrated daily to achieve a consistent total dose. The dose was given at a rate of approximately 1.4-1.5 Gy/min each time, with a radiation time of approximately 8-9 minutes for 13 Gy and 2-3 minutes for 4 Gy. Probes confirmed that radiation outside the collimator was negligible prior to experiments. The anaesthetic was reversed with Naloxone (1.2 mg/kg), Flumazenil (0.5 mg/kg) and Atipam (2.5 mg/kg). Recovery was performed in warming chambers at 37°C. Control mice received a sham-irradiation, whereby they were placed under anaesthetic for the same length of time, and recovered in warming chambers.
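As a quick consistency check of the quoted exposure times (not part of the original protocol), the times follow directly from dose and dose rate:

# Exposure time = dose / dose rate; the values below reproduce the ~8-9 min (13 Gy)
# and ~2-3 min (4 Gy) windows quoted above.
for dose in (13.0, 4.0):                  # Gy
    for rate in (1.4, 1.5):               # Gy/min
        print(f"{dose:>4} Gy at {rate} Gy/min -> {dose / rate:.1f} min")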
Targeted Partial-lung irradiations
An Xstrahl Small Animal Radiation Research Platform (SARRP) S/N 525722 irradiator (225 kV x-ray tube, half value layer 0.847 mm Cu), with 0.1 mm integral Be filtration, was used for mouse treatments. Treatment was performed at University College London under project license PPL70/9032. CBCT was performed before each treatment to confirm target position. Each mouse was anaesthetized with isoflurane and positioned for optimal right lung targeting on a 3D printed bed. The bed was rotated between the x-ray source and a digital flat-panel detector. The images were obtained at 60 kVp and 0.8 mA with 1 mm Al filtration of an uncollimated primary beam. During rotation, 360 projections were acquired (approx. 1° increments for each projection, approx. 0.01 Gy total radiation dose). The CBCT projections were rendered into a 3D image reconstruction using the FDK algorithm, with a voxel size between 0.01 and 5 mm. Muriplan software was used to set radiodensity thresholds for different tissues and enable treatment planning, setting an isocentre to target the right lung individually and avoid the left lung, heart and trachea. Dose calculation was computed using a Monte Carlo simulation superposition-convolution dose algorithm, similar to those used clinically.
Mice received a single treatment of either 8 Gy or 12 Gy, delivered as one angled beam, targeted to maximise dose delivery to a single lung. The beam used was 220 kV and 13 mA, filtered with 0.15 mm Cu, with a dose rate of 2.37 Gy/min under reference conditions. The beam was collimated to match the treatment volume, with a beam cross-section of up to 10 × 10 mm. The time calculated to deliver 8 Gy to individual mice varied between 222 and 229 seconds, and to deliver 12 Gy varied between 318 and 348 seconds.
Cherry Labelling tool
The Cherry labelling tool developed by our laboratory has been recently described in detail 2,27 . A soluble peptide (SP) and a modified TAT peptide were cloned upstream of the mCherry cDNA, under the control of a mouse PGK promoter (sLP-Cherry). The sLP-Cherry sequence was then cloned into a pRRL lentiviral backbone. 4T1 cancer cells were stably infected with sLP-Cherry and pLentiGFP lentiviral particles and subsequently sorted by flow cytometry to isolate 4T1-Cherry + GFP + labelling cells.
Induction of metastasis
To induce spontaneous metastases, mice were given an orthotopic injection of 50μl growth factor-reduced (GFR) Matrigel (BD Biosciences) containing 1 × 10 6 4T1 or 4T07 breast cancer cells in the left mammary fat pad, using a 29-gauge insulin syringe. The injections were performed under anaesthesia (isoflurane).
To induce experimental metastasis, 4T1 breast cancer cells (0.5 × 10 6 ), MMTV-PyMT primary mammary tumour cells (1 × 10 6 ), E0771 breast cancer cells (0.5 × 10 6 ) were re-suspended in 100μl PBS and intravenously injected into BALB/c, Gcsf Ko or C57BL/6J mice, respectively. For human cancer cell metastasis, Flo-1 oesophageal cancer cells (1 × 10 6 ) were intravenously injected into NSG mice, and H460 and A549 lung NSCLC cells (1 × 10 6 ) were injected into Nude BALB/c mice. All animals were monitored daily for unexpected clinical signs following the project license PPL80/2531 guidelines and the principles set out in the NCRI Guidelines for the Welfare and Use of Animals in Cancer Research (UK). The rationale and process for histological quantification of lung metastatic burden is provided in Supplementary File 1.
In vivo treatments
For neutrophil depletion, rat anti-Ly6G antibody (BioXcell, clone 1A8, 12.5 μg/mouse) in PBS or rat IgG isotype control (Cell Services Unit, Francis Crick Institute) was administered daily to BALB/cJ female mice via intraperitoneal injection. Recombinant GCSF protein (Novoprotein, C002, 5 μg/mouse in PBS) was administered subcutaneously to Gcsf Ko mice every other day for a total of four injections. For Notch inhibition, BALB/cJ female mice received daily intraperitoneal injections of the γ-secretase inhibitor DAPT dissolved in corn oil (Sigma, 10 mg/kg body weight), or a vehicle control. To inhibit neutrophil degranulation, BALB/cJ female mice were given an intraperitoneal injection of Nexinhib20 30 (30 mg/kg in corn oil), or a vehicle control, three times per week for two weeks. For tamoxifen administration, tamoxifen (Sigma-Aldrich) was dissolved in corn oil in a 40 mg/ml stock solution. Three doses (0.2 mg per g body weight) were given via oral gavage over consecutive days, and mice were harvested two weeks after the final dose.
Tissue digestion for cell isolation or analysis
Lung tissue was minced manually with scissors and digested with Liberase TM and TH (Roche Diagnostics) and DNase I (Merck Sigma-Aldrich) in HBSS for 30 min at 37°C in a shaker at 180 rpm. Samples were passed through a 100 μm filter and centrifuged at 1250 rpm for 10 min. The cell pellet was incubated in Red Blood Cell Lysis buffer (Miltenyi Biotec) for 5 min at room temperature. After centrifugation, cells were washed with MACS buffer (0.5% BSA and 250 mM EDTA in PBS) and passed through a 20 μm strainer-capped tube to generate a single cell suspension. Antibody staining was then performed for cell isolation (flow cytometry or magnetic-bead based cell separation), or for flow cytometry analysis.
FACS analysis and cell sorting
Mouse lung single-cell suspensions prepared as above were incubated with mouse FcR Blocking Reagent (Miltenyi Biotec) for 10 min at 4°C, followed by incubation with fluorescently-conjugated antibodies for 30 min at 4°C (see Supplementary Table 1).
Lungs were collected one week later, fixed in 4% PFA and embedded in paraffin. To quantify metastatic burden, 4μm serial sections from the entire lung were stained for haematoxylin & eosin (H&E). The total number of metastatic foci in each lung was counted manually, using a Nikon Eclipse 90i light microscope.
Lung organoid assay
Lung organoid co-culture assays have been previously described 46 . Lung epithelial cells (Epcam + CD45 -CD31 -Ter119 -) from control BALB/cJ mice or irradiated mice were FACS sorted and resuspended in 3D organoid media (DMEM/F12 with 10% FBS, 100 U/ml penicillin-streptomycin, and insulin/transferrin/selenium (Merck Sigma-Aldrich)). Cells were mixed with murine normal lung fibroblast (MLg) cells and resuspended in GFR Matrigel at a ratio of 1:1. 100μl of this mixture was pipetted into a 24-well transwell insert with a 0.4 μm pore (Corning). 5,000 epithelial cells and 25,000 MLg cells were seeded in each insert. After incubating for 30 min at 37°C, 500μl organoid media was added to the lower chamber and media changed every other day. Bright-field images were acquired after 14 days using an EVOS microscope (ThermoFisher Scientific), and quantified using FiJi (version 2.0.0-rc-69/1.52r, ImageJ).
Cancer cell stemness assay
Control and irradiated lungs harbouring 4T1 metastases were dissociated to a single cell suspension as described above. Cells were plated overnight (>12 h) in DMEM with 10% FBS and 100 U/ml penicillin-streptomycin. The following day, wells were washed 4x in PBS to remove non-adherent cells, trypsinised and counted. For all experiments, a metastasis-free normal lung was dissociated and plated overnight in the same conditions to confirm non-adherence of non-cancerous cells. For 2D assays, cancer cells were resuspended in DMEM/10% FBS at a density of 1 × 10 3 cells/ml or 5 × 10 3 cells/ml and cultured in 6-well plates for 7 days. At endpoint, cells were fixed in chilled acetone and methanol (1:1) and stained with Giemsa (Merck). For 3D assays, 5 × 10 4 cancer cells/ml were resuspended in GFR Matrigel and 20 μl droplets seeded into 24-well plates. After a 30 min incubation at 37°C, 500 μl 3D growth medium (DMEM/F12 supplemented with 100 U/ml penicillin-streptomycin, 20 ng/ml EGF, 20 ng/ml bFGF, 4 μg/ml Heparin and B27 (50x)) was added. Media was replaced every 48h and bright-field images acquired at day 7 using an EVOS microscope.
3D cell culture
GFP + MMTV-PyMT cells were seeded (5,000 cells/well) in a collagen-coated Alvetex™ Scaffold 96-well plate (ReproCELL). The following day, lungs were harvested from control or irradiated BALB/cJ mice (7 days post-irradiation), and EpCAM + epithelial cells or Ly6G + neutrophils were isolated by MACS sorting. Cells were seeded on top of the cancer cells (50,000 cells/well). The growth of GFP + cells was monitored using the SteREO Lumar.V12 stereomicroscope (Zeiss), and images quantified using ImageJ. For quantification, the Li's Minimum Cross Entropy thresholding algorithm was performed on image stacks.
EdU in vitro proliferation assay
GFP + MMTV-PyMT cells were seeded at a density of 10,000 cells/well into collagen-coated 6-well plates in MEM media. The following day, Ly6G + lung neutrophils were isolated via MACS sorting from control or irradiated mice and added to the wells at a density of 100,000 cells/well. After 60h, wells were supplemented with 20 μM EdU (5-ethynyl-2'deoxyuridine, Sigma-Aldrich). Cells were harvested 6h later, and EdU incorporation assessed using the Click-iT Plus EdU Flow Cytometry Assay Kit (ThermoFisher Scientific), according to the manufacturer's instructions. Sample data were acquired on a BD LSR-Fortessa and analysed using FlowJo 10.4.2.
EdU in vivo
Mice were treated with a single intraperitoneal injection of EdU dissolved in PBS (25 mg/kg body weight). Mice received targeted lung irradiation 1h later, and were culled either 1h or 7 days post-irradiation. Lungs and bone marrow were harvested, and EdU incorporation assessed in single cell suspensions stained for Ly6G-PE (BD Biosciences, clone 1A8) using the Click-iT Plus EdU Flow Cytometry Assay Kit. Data was acquired on a BD LSR-Fortessa and analysed using FlowJo 10.4.2.
Immunohistochemistry & Immunofluorescence
Mouse lungs were fixed overnight in 4% PFA and embedded in paraffin blocks. 4μm tissue sections were cut, deparaffinised and rehydrated using standard methods. Antigen retrieval was performed using pH 6
Immunostaining quantification
To quantify Hes1, RBPJ and Sox9 expression within metastases, positive cells within each metastatic area were counted and normalised to tumour size (the number of positive cells within each metastasis divided by the tumour area, as measured in ImageJ). A minimum of 20 images per lung were quantified. The intensity of RBPJ staining was determined using ImageJ. Following colour deconvolution, the mean grey value was measured specifically for the metastatic area within each image. The optical density was determined as OD = log10(maximum grey value / mean grey value). The resulting optical density measurements were averaged across all images per sample, generating an average intensity of RBPJ staining per lung.
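A minimal sketch of this per-lung calculation (illustrative only; the grey values themselves were measured in ImageJ after colour deconvolution, so the inputs below are hypothetical):

import numpy as np

def mean_rbpj_od(mean_grey_values, max_grey_value=255.0):
    # OD = log10(max grey value / mean grey value), averaged over all images per lung.
    ods = np.log10(max_grey_value / np.asarray(mean_grey_values, float))
    return float(ods.mean())

print(mean_rbpj_od([180.0, 150.0, 200.0]))   # hypothetical per-image mean grey values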
Immunofluorescent staining on coverslips
EpCAM + epithelial cells were MACS-sorted from control, irradiated and neutrophil-depleted irradiated lungs, plated on poly-lysine glass coverslips for 15 min at room temperature, and fixed in 4% PFA in PBS for 10 min. After fixation, cells were permeabilized with 0.1% Triton-X-100 in PBS for 5 min and incubated with a blocking solution (1% BSA, 5% goat serum in PBS) for 1h at room temperature. Next, cells were incubated for 1h at room temperature with an anti-activated Notch1 antibody (Abcam, ab8925) diluted in blocking solution followed by a 1h incubation with a goat anti-rabbit AlexaFluor 488 (1:500, ThermoFisher Scientific) at room temperature. Coverslips were then mounted onto slides using Vectashield Mounting Medium with DAPI for imaging. Images were acquired using a Zeiss Upright710 confocal microscope, and quantified using the MeasureObjectIntensity function in CellProfiler. For analysis of neutrophil nuclear segmentation, Ly6G + neutrophils were MACS-sorted from lung suspensions and plated on poly-lysine coverslips as described above. After fixation, coverslips were mounted onto slides using Vectashield Mounting Medium with DAPI and images captured with a Zeiss Upright710 confocal microscope. FiJi (version 2.0.0-rc-69/1.52r, ImageJ) was used to analyse fluorescence images.
Senescence-associated β-galactosidase staining
Senescence-associated β-galactosidase staining was performed on lung cryosections preserved in OCT freezing medium, using the Senescence β-galactosidase staining kit (cat#9860, Cell Signalling Technology), according to the manufacturer's instructions. Cryosections of 10 μm were fixed in a 2% formaldehyde and 0.2% glutaraldehyde solution in PBS at room temperature for 5 min. Sections were washed 3 times with PBS, and incubated for 48 hrs at 37°C with the staining solution containing X-gal in N-N-dimethylformamide (pH 6.0). Sections were counterstained with Nuclear Fast Red (cat no. H-3403, Vector), mounted in VectoMount™ AQ (cat no. H-5501, Vector) and imaged on a Nikon Eclipse 90i microscope. For quantification, the area of SA-β-gal + regions per lung was measured using ImageJ.
Quantitative Proteomics of neutrophils using TMT labelling
Ly6G + lung or bone marrow neutrophils were isolated by MACS-sorting. Cells were lysed by adding three cell-pellet volumes of lysis buffer (8 M urea, 50 mM TEAB, protease inhibitors) followed by sonication. Insoluble material was removed by centrifugation at 16,000 × g for 10 minutes at 4°C. Following a Bicinchoninic Acid (BCA) assay for protein quantification, 25 μg of total protein was transferred into 6 labelled tubes and adjusted to a final volume of 25 μl with 50 mM TEAB. Reduction and alkylation were performed using DTT and iodoacetamide in 50 mM TEAB. Proteins were then precipitated overnight by adding six volumes of ice-cold acetone. The next day, proteins were resuspended in 25 μl of 50 mM TEAB and digested with trypsin for 16 hours at 37°C (1:50 w:w). Peptides were then desalted using a C18 macrospin column (The Nest Group Inc), re-solubilised in 50 mM TEAB and subsequently labelled using a 0.2 mg TMT 10-plex kit (Thermo Scientific). Peptide labelling efficiency was assessed using a QExactive orbitrap (Thermo Scientific) mass spectrometer to ensure that all samples displayed >98% labelling efficiency.
Next, 5% hydroxylamine was added to each sample to sequester unreacted label and the samples were mixed. The 10-plex sample was desalted using a C18 macrospin column and peptides were eluted with 80% acetonitrile and dried in a vacuum centrifuge. High pH reversed-phase fractionation (Pierce, UK) was employed, with nine fractions collected and dried to completeness. Each fraction was analysed using a Fusion Lumos Tribrid orbitrap mass spectrometer coupled to an UltiMate 3000 HPLC system.
Quantitative RT-PCR
RNA was isolated from sorted lung populations using the RNeasy Mini kit (Qiagen), with an on-column DNase treatment (Qiagen). cDNA was synthesized with random primers, using the SuperScript III First Strand Synthesis Kit (Thermo Fisher Scientific) according to the manufacturer's protocol. The PCR, data collection and data analysis were performed on a QuantStudio™ 3 Real-time PCR system (Thermo Fisher Scientific), using a PowerUp™ SYBR Green Master Mix (Thermo Fisher Scientific). Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was used as an internal expression reference. Primer sequences are provided in Supplementary Table 2.
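The text specifies only that GAPDH served as the internal reference; assuming the standard ΔΔCt method (an assumption, not the authors' stated pipeline), relative expression would be computed roughly as:

def relative_expression(ct_gene, ct_gapdh, ct_gene_ctrl, ct_gapdh_ctrl):
    # delta-delta-Ct: normalise to GAPDH, then express as fold change vs. control.
    d_ct = ct_gene - ct_gapdh
    d_ct_ctrl = ct_gene_ctrl - ct_gapdh_ctrl
    return 2 ** -(d_ct - d_ct_ctrl)

print(relative_expression(24.0, 18.0, 26.0, 18.5))   # ~2.8-fold vs. control (hypothetical Ct values)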
RNA sequencing sample preparation
Single cell RNA sequencing-Single cell suspensions of metastatic lungs from control or irradiated mice (n=10 per group, pooled) were prepared as described above. CD45- Ter119- cells were sorted by flow cytometry following staining with anti-mouse CD45, Ter119 and DAPI. Library generation for 10x Genomics analysis was performed using the Chromium Single Cell 3' Reagents Kits (10x Genomics), followed by sequencing on a HiSeq 4000 (Illumina) to achieve an average of 50,000 reads per cell.
Bulk RNA sequencing-Lung CD45- CD31- Ter119- EpCAM+ and CD45- CD31- Ter119- EpCAM- cells were sorted from control, irradiated or neutrophil-depleted irradiated mice 7 days post-irradiation by flow cytometry. Total RNA was isolated using the miRNeasy Micro Kit (Qiagen, cat# 217084), according to the manufacturer's instructions. Library generation was performed using the KAPA RNA HyperPrep with RiboErase (Roche), followed by sequencing on a HiSeq (Illumina), to achieve an average of 30 million reads per sample.
Bioinformatic analysis
Bulk RNA sequencing-Raw fastq files were adapter-trimmed using CutAdapt 1.5 47 and trimmed reads were mapped to GRCm38 release 86 with associated Ensembl transcript definitions using STAR 2.5.2a 48 wrapped by RSEM 1.3.0 49 , which was used to calculate estimated read counts per gene. Where necessary, bam files were merged using samtools 1.8 50 . Estimated counts from samples in the Epithelial and Mesenchymal groups were normalised separately, with normalisation and differential expression of genes being called between experimental groups using the R package DESeq2 1.12.3 51 . Genes with an adjusted p-value less than or equal to 0.05 were considered differentially expressed. Differentially expressed genes were further analysed for their pathway enrichments using Metacore (version 21.3, https://portal.genego.com).
Single Cell RNA sequencing-Raw reads were first processed by the Cell Ranger v2.1.1 pipeline, using STAR (v2.5.1b) to align to the mm10 transcriptome, deconvolve reads to their cell of origin using the UMI tags and report cell-specific gene expression count estimates. All subsequent analyses were performed in R version 4.0.3 using the Seurat package (4.0.1) 52 . Datasets underwent initial filtering to remove genes expressed in fewer than 3 cells and cells with fewer than 200 detected genes. For the irradiated lung niche dataset (mCherry+ve), further filtering was performed by removing cells with: <500 genes, >3,000 genes, a mitochondrial gene content of >5.5% and a total number of reads >5300. Similarly, for the irradiated distal lung dataset, the cut-offs of <500 genes, >4000 genes, <6% mitochondrial gene content and <12000 total reads were used. Datasets were then normalised using the Seurat package SCTransform and integrated using the SelectIntegrationFeatures, PrepSCTIntegration, FindIntegrationAnchors and IntegrateData functions 53 . Dimensional reduction was performed using principal component analysis (PCA) on the integrated dataset. Unsupervised clustering was performed with the first 30 dimensions using the FindNeighbors and FindClusters functions, and UMAP (Uniform Manifold Approximation and Projection) was used for cluster visualisation. Epithelial, endothelial and mesenchymal populations were identified based on expression of the markers Epcam, Pecam1, and Pdgfra/Pdgfrb, respectively, and the FeaturePlot function was used for their visualisation. Cells with marker gene expression greater than 0.5 were selected for further analysis. Notch signalling pathway gene signature analysis was performed using the Reactome Signaling by Notch gene set within MSigDB. Single cell signature scores were calculated for niche and distal cells within each cell-type subset, using the Vision package v.2.1.0 54 .
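The pipeline above is Seurat/SCTransform in R; purely for orientation, an analogous (but not equivalent) QC-and-clustering workflow can be sketched in Python with scanpy, reusing the niche-dataset thresholds quoted above (the input path and remaining parameter choices are illustrative assumptions, not the authors' code):

import scanpy as sc

adata = sc.read_10x_mtx("cellranger_out/filtered_matrix/")       # hypothetical path
sc.pp.filter_genes(adata, min_cells=3)                           # genes detected in >=3 cells
sc.pp.filter_cells(adata, min_genes=200)                         # cells with >=200 genes
adata.var["mt"] = adata.var_names.str.startswith("mt-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], inplace=True)
adata = adata[(adata.obs.n_genes_by_counts > 500)
              & (adata.obs.n_genes_by_counts < 3000)
              & (adata.obs.pct_counts_mt < 5.5)].copy()           # niche-dataset cut-offs
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.scale(adata)
sc.tl.pca(adata, n_comps=30)
sc.pp.neighbors(adata, n_pcs=30)
sc.tl.leiden(adata)                                               # unsupervised clustering
sc.tl.umap(adata)
sc.pl.umap(adata, color=["Epcam", "Pecam1", "Pdgfra", "Pdgfrb"])  # marker-based annotation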
Extended Data
Extended Data Fig. 1. Radiation exposure in healthy lung tissue enhances metastasis. a, Representative H&E images of metastatic lungs from control (UT) and irradiated BALB/c mice orthotopically injected with 4T1 breast cancer cells to generate a primary tumour. The metastatic area is depicted with a dashed line (n=4 mice per group, 2 independent experiments). Scale bar, 250 μm. b,c, FACS quantification of GFP + tumour cells in the metastatic lung (b) and primary tumour volume (c) from control and irradiated mice (n=4 mice per group, one experiment). d,e, Representative H&E images (d) and FACS quantification of GFP+ tumour cells (e) from metastatic lungs from control (UT) or irradiated FVB mice intravenously injected with GFP+ MMTV-PyMT primary mammary tumour cells at day 7 (n=5 per group, one experiment). Scale bar, 100 μm. f, Table and representative immunostaining of human-specific Lamin A/C to detect human lung cancer cells growing in BALB/c Nude mice (n=4 mice per group for each cell line tested). Mice were intravenously injected with H460 or A549 human NSCLC cells 7 days following targeted lung irradiation (or sham-irradiation for control mice) and lungs harvested 3 weeks later. The table depicts a-c, Example of FACS gating strategy to determine the frequency of (a) GFP + cancer cells or (b) Ly6G + neutrophils in the lung tissue of control/irradiated mice, or (c) Lineage -/ EpCAM + epithelial cells in the irradiated metastatic niche (Cherry-labelled, left), or unlabelled distant lung (Cherry-negative, right). All samples were gated to exclude debris and doublets, followed by live cell discrimination by DAPI staining. All gates were set based on fluorescence-minus-one (FMO) controls, containing all antibodies minus the one of interest, to determine the background signal. Importantly, lung tissue displays a high level of autofluorescence, which needs to be considered when excluding dead cells (an FMO-DAPI is critical for this). Fig. 3. Radiation exposure induces lung perturbations including neutrophil infiltration and activation. a,b, Representative immunofluorescent images (a) and quantification (b) of phospho-Histone H2A.X (Ser139) (green) and DAPI (blue) stained lungs from control/irradiated mice, 7 days post-irradiation (n=3 mice per group, one experiment). Scale bar, 10 μm. 6 fields of view were quantified per mouse. c,d, Representative images of senescenceassociated β-galactosidase (SA-β-gal) staining on lung cryosections (c) and quantification (d) from control/irradiated mice, 7 days post-irradiation (n=3 mice per group, one experiment). Scale bar, 25 μm. e, Quantification of immunostaining for S100A9 + neutrophils in metastases from control and irradiated lungs at day 14 (7 days post-IV, Figure 1d) (n=6 mice per group, 2 independent experiments). The number of neutrophils within the metastatic area was normalised to tumour area. f, FACS quantification of GFP + cancer cells Europe PMC Funders Author Manuscripts in control/irradiated lungs from RAG1-ko mice at day 14 (7 days post-IV). n=9 mice per group, two independent experiments, grey dots C57BL/6J and white dots FVB background. g,h, Volcano plots showing protein expression from irradiated versus control lung (g) and bone marrow (h) neutrophils. A selection of differentially expressed proteins in the lung are depicted in red, with the same proteins shown in bone marrow samples (n=3 mice per group). i, Table of represented as mean ± s.e.m. Statistical analysis by unpaired two-tailed t-test for (a) and
Supplementary Material
Refer to Web version on PubMed Central for supplementary material. a,b, Representative H&E images of lungs (a) and S100A9 immunostaining for neutrophils (b) from control and irradiated mice at day 7 (n=6 mice per group, 2 independent experiments). Scale bars, 250 μm (a) and 100 μm (b). c, Immune cell frequencies in the lungs estimated by FACS, 7 days post-irradiation (n=7 mice per group, two independent experiments). Data is presented either as the frequency among live cells (for total CD45 + immune cells), or frequency among CD45 + cells. d,e, Representative immunofluorescent images (d) and quantification of nuclear segmentation (e) from sorted Ly6G + lung neutrophils harvested from control and irradiated lungs at day 7 post-irradiation (n=7 mice per group, two independent experiments). DAPI staining was performed on fixed cells plated on poly-lysine coverslips (n=2 coverslips per mouse for each experiment). Each data point (n=30 UT, n=32 IR) represents the average segmentation across all cells within the field of view. Scale bars: main panel, 100 μm; enlarged insets, 10 μm. f, Experimental setup for quantitative mass-spectrometry based proteomic analysis of Ly6G + positive cells. Cells were isolated from the lungs and from bone marrow extracted from the femur of control and irradiated BALB/c mice at day 7 by MACS sorting. g, Metacore pathway analysis of proteins upregulated by >1.2 fold in lung neutrophils from irradiated mice, compared to control mice (n=3 mice per group). h, Granule protein upregulation in the lungs of irradiated versus control mice (n=3 mice per group). Each dot represents an individual granule protein (n=24), red dots depict an enrichment in irradiated mice, blue dots represent downregulation. All data represented as mean ± s.e.m. Statistical analysis by a two-tailed t-test with Welch's correction for (c) and unpaired two-tailed t-test for (e). Gating strategies for FACS analysis provided in Extended Data Fig. 2. Source data. a, Experimental setup for the neutrophil adoptive transfer. Control or radiation-primed Ly6G + lung neutrophils were MACS-sorted 7 days following irradiation and intravenously injected into naïve recipient BALB/c mice. Mice were given an intravenous injection of 4T1-GFP + cancer cells 4 days later, and metastatic burden assessed after one week. b-d, Quantification of the entire lung (serial sectioning) (b) and representative images of GFP (c) and H&E stained lungs (d) (n=6 control, n=7 irradiated mice, two independent experiments). Metastatic foci are outlined and indicated with arrows. See methods for quantification details. Scale bar, 100 μm. e, Schematic depicting the experimental setup for bulk RNA-sequencing. BALB/c mice were given daily injections of anti-Ly6G to deplete neutrophils or a control IgG antibody, beginning the day before targeted lung irradiation. Flow cytometry was used to isolate CD45 -CD31 -Ter119 -(Lin -)EpCAM + lung epithelial and Lin -EpCAMmesenchymal cells from control, irradiated and neutrophil-depleted irradiated mice 7 days post-irradiation (n=4 mice per group). f, Principle Component Analysis (PCA) of Lin -EpCAM + epithelial cell signatures following RNA-seq analysis of control, irradiated and neutrophil-depleted irradiated lung samples. Each dot represents an individual mouse, with ovals enclosing the samples from each group to highlight their similarity in the PCA plot. g, Experimental setup for lung epithelial analysis. 
Lin -EpCAM + lung epithelial cells harvested from control, irradiated and neutrophil-depleted irradiated BALB/c mice (7 days post-irradiation) were sorted by flow cytometry and co-cultured in Matrigel with MLg normal lung fibroblasts to generate lung organoids. h,i, Representative images (h) and quantification (i) of lung organoid co-cultures. Scale bar, 1000 μm. Quantification of organoid number is shown as the percentage reduction in organoids compared to the control group. Each dot represents an individual mouse, with the three independent experiments indicated by coloured dots (n=12 mice per group). Triplicate technical replicates were quantified for each mouse. Data represented as mean ± s.e.m. Statistical analysis by two-way ANOVA for (b) and non-parametric two-tailed Mann-Whitney test for (h). UT, untreated; IR, irradiation. Gating strategies for FACS sorting provided in Extended Data Fig. 2. Source data. a, Heatmap of Lin -EpCAM + lung epithelial cells from control, irradiated and neutrophildepleted irradiated mice (left) (hierarchically clustered samples in columns and genes in rows, n=4 mice per group), and Metacore pathway analysis (right) of the genes triggered by radiation, but influenced by neutrophils (gene set B, indicated by the arrow). b, Heatmap showing selected genes from gene set B from (a) (hierarchically clustered samples in columns and selected genes in rows). c, Representative images for Hes1 immunostaining in lung tissue from control or irradiated mice at day 7. Positive cells are indicated by arrows in the enlarged inset. Scale bar, 100 μm (main image), 10 μm (inset) (n=4 mice per group, one experiment). d, Experiment setup for genetic Notch activation. PyMT/Notch mice (MMTV-PyMT/SPC-Cre-ER T2 /Rosa26 NICD-IRES-GFP ) or PyMT/Control mice were administered tamoxifen by oral gavage (40 mg/kg) over three consecutive days to drive Cre expression in lung alveolar cells. Lungs were harvested 14 days after the last tamoxifen dose and assessed a, Cherry-niche labelling tool. Irradiated BALB/c mice were intravenously injected with GFP + 4T1-sLP-Cherry-labelling cancer cells 7 days post-irradiation. One week later, GFP -CD45 -Ter119 -mCherry + (red) and GFP -CD45 -Ter119 -mCherrycells (grey) were FACSsorted from metastatic lungs, representing the labelled metastatic niche or distant lung cells, respectively. b, Combined Uniform Manifold Approximation and Projection (UMAP) plot of cells from the mCherry + niche and mCherrydistant lung (n=10 mice, pooled). Clusters representing epithelial cells (EpCAM + ), fibroblasts (Pdgfrα/β + ) and endothelial cells (Pecam1(CD31) + ) are outlined. c, Notch signalling (Reactome) score in irradiated Cherry + niche (red) and Cherrydistant lung (blue) cells for the compartments in (b), calculated in VISION (methods). d,e, Quantification of RBPJ (d) and Hes1 (e) immunostaining in lung metastases from control, irradiated, and neutrophil-depleted irradiated mice at day 14 (n=7 mice per group, two independent experiments). Representative images in Extended Data Fig. 8e, quantification in methods. f, Cancer stemness assays. Metastatic lungs from a, Schematic depicting the treatment of control and irradiated mice with the neutrophil degranulation inhibitor Nexinhib20. Control or irradiated BALB/c mice were treated with Nexinhib20 or a vehicle control 3x per week (30 mg/kg), beginning the day before irradiation. GFP + 4T1 breast cancer cells were intravenously injected at day 7 and metastatic burden assessed at day 14. 
b, Lung metastatic burden in control and irradiated mice treated with Nexinhib20 or vehicle, quantified as tumour area by H&E (see Supplementary File 1) (n=6 mice per group, 2 independent experiments). c,d, Quantification of Hes1 (c) and Sox9 (d) immunostaining in metastatic lesions from (b) (n=6 mice per group, two independent experiments) e, Experimental setup of Ela2ko mouse model. Ela2ko mice or C57BL/6J wild-type littermates received a 13 Gy dose of targeted whole-lung irradiation. Mice received an intravenous injection of E0771 breast cancer cells at day 7, and lungs were harvested at day 14 to assess metastatic load. f-h, Representative H&E staining (f) quantification of metastatic number (g) and Hes1 immunostaining in metastatic lesions (h) from the irradiated lungs of Ela2ko or wild-type control mice (n=6 mice per group, one experiment), Scale bar, 250 μm. All data represented as mean ± s.e.m. Statistical analysis by one-way ANOVA for (b-d), and a two-tailed t-test for (g) and (h). CTL, control; IR, irradiated; Veh, vehicle; Nex, Nexinhib20; IV, intravenous; Ko, knock-out. Metastases quantification process outlined in Supplementary File 1, immunostaining quantification in methods. Source data. The quantification of Ki67 + metastatic foci is depicted as fold change, relative to the control mice treated with vehicle. Ki67 + metastatic foci are indicated with a dashed line. Scale bar, 250 μm. d, Quantification of Hes1 immunostaining in metastatic lesions from (b) (n=8 mice per group, two independent experiments) (see methods). All data represented as mean ± s.e.m. Statistical analysis by one-way ANOVA for (c), and a two-tailed t-test for (d). UT, untreated; IR, irradiated; Veh, vehicle. e, Proposed model for radiation-enhanced lung metastasis. Radiation exposure in healthy lung tissue leads to excessive neutrophil accumulation and activation, inducing an array of tissue perturbations such as Notch activation within epithelial cells. Together these alterations foster a pro-tumorigenic milieu within the irradiated lung, fuelling the subsequent growth of arriving metastatic cancer cells. Source data. | 2022-03-01T06:23:07.821Z | 2022-01-13T00:00:00.000 | {
"year": 2022,
"sha1": "f9e09c8f3ac1b91dbefd8cf43421dd62f0642cab",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "997a77f619f4dcbbfd4795754af48200e660d9cb",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56145003 | pes2o/s2orc | v3-fos-license | Review of A* (A Star) Navigation Mesh Pathfinding as the Alternative of Artificial Intelligent for Ghosts Agent on the Pacman Game
The shortest pathfinding problem has become a popular issue in game Artificial Intelligence (AI). This paper discusses an effective way to address the shortest pathfinding problem, namely the Navigation Mesh (NavMesh). This method is interesting because it has a wide area of implementation, especially in the games world. In this paper, NavMesh was implemented using the A* (A star) algorithm and examined in the Unity 3D game engine. A* is an effective algorithm for the shortest pathfinding problem because its optimization is made with effective tracing using segmentation lines. The Pac-Man game was chosen as the example of shortest pathfinding using NavMesh in Unity 3D. The A* algorithm was implemented on the enemies of Pac-Man (three ghosts), whose paths were designed using the NavMesh concept. Thus, the movement of the ghosts in catching Pac-Man was used to review the effectiveness of this concept. In further research, this method could be applied to several optimization problems, such as Geographic Information Systems (GIS), robotics, and statistics.
INTRODUCTION
Pac-Man became a legendary video game during the 1980s. The most interesting case in Pac-Man is the intelligent behavior of the three ghosts (Pac-Man's enemies). They walk and catch Pac-Man wherever he goes. The three ghosts are the agents, or NPCs, on which the Artificial Intelligence (AI) algorithm is embedded. Previously, the algorithm most often used for this AI was Dijkstra [1].
Shortest pathfinding is the process of determining the movement of objects to another position without collision [2]. This method is not only used in game AI, but also in other fields, such as Geographic Information Systems (GIS), robotics, and statistics, among others. Several approaches have been proposed to solve the pathfinding problem, such as Finite State Machines (FSM), Graphs, Dijkstra, and A*. FSM is the simplest approach which can be implemented for this pathfinding problem, but its connection graph is large and the movement is easy to predict. Another approach which can be implemented for the pathfinding problem is the Graph. A Graph creates several waypoints depending on the weights of the graph edges. Besides the Graph, there is the Dijkstra algorithm. It uses the greedy principle. The greedy principle in the Dijkstra algorithm means that at each step, we choose the edge which has the minimum weight and put it into the set of solutions [2].
Today's industrial development requires optimal shortest pathfinding algorithms to solve the problem, because the resources involved have become very complex. To solve the problems of pathfinding algorithms, the game industry has carried out research and removed several parts which are not needed. The resulting solution is the Navigation Mesh (NavMesh); however, NavMesh can only select a reasonable path for each moving object. A navigation mesh is a technique to represent a game world using polygons. Due to its simplicity and high efficiency in representing the 3D environment, the navigation mesh has become a mainstream choice for 3D games [2]. For its implementation, this paper examines the NavMesh concept in the Unity 3D game engine.
RELATED WORKS
To support the basic theory about ghost navigation (the NPCs in Pac-Man), several concepts from prior research about the basic navigation mesh (NavMesh) and its features in Unity 3D are discussed as follows.
Navigation Mesh
The navigation mesh has become a popular concept for the shortest pathfinding problem in 3D games, because the 3D environment mostly uses polygon structures. In a navigation mesh, the properties of the polygon objects or terrain can guarantee a free walk for a game character, as long as the polygon in which the character stays is a convex polygon [2]. Figure 2.1 illustrates pathfinding with a waypoint graph and with a navigation mesh [3]. As shown in Figure 2.1, there are differences between shortest pathfinding using a waypoint Graph and using a Navigation Mesh. Figure 2.1 (a) shows how the graph algorithm takes a path when moving from start to goal. The Graph chooses the nearest point based on the established weight values. There is no optimization after this process. If we compare it with the NavMesh concept in Figure 2.1 (b), the character moves from the start point to the destination/goal in a straight line. In this case, the start point is not in the same polygon as the goal point. The character needs to determine which polygon it should go to next. It repeats this step until both the character and the goal are located in the same polygon. Finally, the character can move to the destination in a straight line [2].
The pathfinding process in a navigation mesh can be implemented with several algorithms, but the most effective and popular algorithm implemented today for the shortest pathfinding problem is A* (A Star). A* is a generic search algorithm that can be used to find solutions for many problems, and pathfinding is one of them. For pathfinding, the A* algorithm repeatedly examines the most promising unexplored location it has seen. When the location explored is the goal, the algorithm is finished. Otherwise, it notes all locations around it for further exploration. A* is probably the most popular pathfinding algorithm in today's AI (Artificial Intelligence) games [3]. The triangulation optimization shown in Figure 2.5 is made by maximizing the minimum angle of each triangle. In this figure, the optimization is made by effective tracing with segmentation lines. The point s is the start and the point g is the goal. Pathfinding passes from the start point to the goal point. In this case, s first goes through v1 and v2 respectively. In the next step, v1 goes through v3 (v3 is recognized as the shortest-path point which is closest to the goal). After that, Figure 2.5 (b) shows that the tracing line is updated from s straight to v3, because v3 was already established as a point in the first segmentation step; the path line from v1 to v3 is then removed, and v3 must decide the next point to be traced, acting as the start point of the previous segmentation (see Figure 2.5 (c)). Finally, this continues until it reaches the goal point. The optimum path does not cross any triangle more than once.
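For illustration, a minimal generic A* search over a graph of walkable polygons (or grid cells) can be sketched as follows; this is not the Unity implementation examined in the paper, and the neighbours and h arguments are assumptions standing in for the NavMesh adjacency and a straight-line heuristic. Setting h = 0 recovers Dijkstra's algorithm, which is the comparison made later in the results.

import heapq, itertools

def a_star(start, goal, neighbours, h):
    # neighbours(node) yields (next_node, edge_cost); h(node) is an admissible
    # heuristic such as the straight-line distance to the goal.
    counter = itertools.count()                 # tie-breaker so the heap never compares nodes
    open_set = [(h(start), next(counter), 0.0, start, None)]
    came_from, g_cost = {}, {start: 0.0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:                   # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:                        # walk parents back to reconstruct the path
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nxt, cost in neighbours(node):
            new_g = g + cost
            if nxt not in came_from and new_g < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = new_g
                heapq.heappush(open_set, (new_g + h(nxt), next(counter), new_g, nxt, node))
    return None                                 # goal unreachable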
Navigation System in Unity 3D
Unity is a cross-platform game engine developed by Unity Technologies, founded in 2005. Unity has released several versions; the newest version, 5.2.2, has a new feature called the NavMesh path library. As shown in Figure 2.6, there are several components which implement the NavMesh system in Unity: NavMesh, NavMesh Obstacle, NavMesh Agent, and OffMeshLink. The NavMesh is a polygon structure which describes the walkable surfaces of the game world and allows us to find a path from one walkable location to another in the game world; the NavMesh Agent is a component which can move towards the goal on the NavMesh surface, so the agent can avoid obstacles; the NavMesh Obstacle is an object which should be avoided by the agent's movement; and the OffMeshLink is a connection point which allows us to incorporate navigation shortcuts which cannot be represented using a walkable surface.
SYSTEM DESIGN
In the system design process, we designed the area of the Pac-Man game, then determined the obstacles, and created the actors, including the player, the agents/NPCs, and the bonus actors. As shown in Figure 3.1, there were five steps to examine the movement of an agent on the navigation mesh. The game arena was made in Unity using a terrain. We created the obstacles by using objects as the walls. Then, we determined the path of the NavMesh surfaces and the outside area which was unwalkable by the actors. Next, we created and determined the actor (red colour) as Pac-Man, the white balls as the ghosts (Pac-Man's enemies), and the orange box as the bonus point. After that, we placed the ghosts randomly in the arena and connected them to the actor with NavMesh A* as the pathfinding agent. If one of the ghosts collided with the actor, the health points of the actor would decrease. If the health points reached 0 (zero), the game was over.
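As a rough sketch of this update logic (not Unity code; in Unity the equivalent step is updating each ghost's NavMesh agent destination every frame), reusing the a_star sketch given earlier, with all names being illustrative assumptions:

def update(ghosts, pacman_pos, health, neighbours, heuristic_to):
    # One game tick: every ghost re-plans a path to Pac-Man and advances one step;
    # touching the actor drains one health point, and the game ends at zero health.
    for ghost in ghosts:
        path = a_star(ghost["pos"], pacman_pos, neighbours, heuristic_to(pacman_pos))
        if path and len(path) > 1:
            ghost["pos"] = path[1]
        if ghost["pos"] == pacman_pos:
            health -= 1
    return health, health <= 0   # (remaining health, game_over flag)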
RESULTS
A. Dijkstra and A* concept in shortest pathfinding
To confirm the effectiveness of the A* implementation, the comparison between the Dijkstra and A* algorithms for pathfinding was shown in Figure 4.1. Figure 4.1 showed how the Dijkstra algorithm worked to find the goal. In this figure, the searching process was carried out by radial search segmentation. The start agent caught the goal by detecting paths around it until the goal agent entered the radial search area. Thus, the agent took the path towards the goal. This method was different from the A* algorithm, as shown in Figure 4.2.
C. A* implementation in Pacman Game using Unity 3D
By placing several ghost agents randomly, we could observe the behavior resulting from the navigation mesh step using the A* algorithm implemented in the character. To find out how effectively this method ran, we captured several experiments as follows: 1. Placing the actor in the centre between two obstacles/walls.
The result was that the enemy ran straight towards the player. The detailed result is shown in Figure 4.4. 2. Placing the actor near the border of the arena and letting one of the enemies stay beside another wall.
The result was that the enemy walked to the actor around the obstacle edge, meaning that it took the shorter path to the actor.
CONCLUSION
It is concluded that the Navigation Mesh (NavMesh) together with the A* algorithm is the best solution for solving the shortest pathfinding problem for today's industrial needs.
The comparison between the Dijkstra and A* shortest pathfinding approaches differed in the concept of the pathfinding area. Dijkstra detected the goal in a radial detection mode, while A* used one radial layer around the current point and then chose the best direction towards the nearest point to the goal. It repeated this until it caught the goal.
As shown by the comparison of the experiments in Figures 4.1, 4.2, and 4.3, the navigation mesh with the A* algorithm was more effective at pathfinding than the Dijkstra algorithm. The comparison data were shown in Table 1. The implementation of the navigation mesh (NavMesh) process on 3D polygon surfaces was the key to an effective map search. By using this method, the agent did not need to explore the unwalkable area. Therefore, updating the path with a straight line and minimizing the corner points when connecting the start towards the goal using the A* algorithm was the best way, and is the core of the effectiveness of the A* optimization.
Figure 2.2. Figure 2.2 is an example of a NavMesh implementation using the symbol P for walkable areas and B for blocked areas/obstacles. In this navigation mesh, the polygons with C symbols are chosen as the navigation way from the start point (polygon Ps) towards the goal point (polygon Pg); in the navigation mapping, the NavMesh needs an optimal algorithm to reach the goal. First, the A* algorithm chooses the shortest path by making connections among the walkable polygons. Therefore, A* is able to detect the walkable polygons from the start point to the goal. Generally, A* chooses one point in every walkable polygon and then connects them until the start and the goal points form one connected line.
Figures 2.3 and 2.4. Figure 2.3 shows that the yellow graph is the path-point connection which can be passed using the available NavMesh, while the blue line is the shortest path determined by the A* algorithm. In this process, the A* shortest pathfinding has not yet been optimized, because every walkable polygon point still has weaknesses, for example a polygon with a wide area. There are several solutions for the shortest path delivered by the A* algorithm: it can use the centroid point, the edge midpoint, or the obstacle point, as shown in Figure 2.4.
Figure 4.2. Figure 4.2 showed how the start agent searched for and caught the goal. The agent only tried one radial layer around itself, then pointed in the best direction towards the nearest point to the goal, until it caught the goal. So, compared to the Dijkstra pathfinding concept, A* had a more efficient way and its pathfinding time was shorter than Dijkstra pathfinding.
Figure 4.3. Figure 4.3 showed the differences between the Dijkstra and A* methods in shortest pathfinding. Figure 4.3 showed how the agent caught the goal by using the Dijkstra algorithm, while the A* algorithm caught the goal in the presence of a wall obstacle. The coloured pattern which existed around the start agent showed how the agent detected the goal. Figure 4.3 (a) showed that the discovery area was larger than what was shown in Figure 4.3 (b), which was very simple and effective.
Figure 4.4. The movement of the enemy: it walked straight towards the Actor (red ball).
Figure 4.4. The movement of the enemy: it walked around the obstacle edge towards the Actor (red ball). | 2018-12-10T23:05:35.516Z | 2016-08-03T00:00:00.000 | {
"year": 2016,
"sha1": "d4dce2a09288cba5f55b83b508f9c5ed84ed19c2",
"oa_license": "CCBYNCSA",
"oa_url": "https://emitter.pens.ac.id/index.php/emitter/article/download/117/56",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d4dce2a09288cba5f55b83b508f9c5ed84ed19c2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
16480411 | pes2o/s2orc | v3-fos-license | Drinfeld J Presentation of Twisted Yangians
We present a quantization of a Lie coideal structure for twisted half-loop algebras of finite-dimensional simple complex Lie algebras. We obtain algebra closure relations of twisted Yangians in Drinfeld J presentation for all symmetric pairs of simple Lie algebras and for simple twisted even half-loop Lie algebras. We provide the explicit form of the closure relations for twisted Yangians in Drinfeld J presentation for the ${\mathfrak{sl}}_3$ Lie algebra.
Introduction
The Yangian Y(g) is a flat quantization of the half-loop Lie algebra L + ∼ = g[u] of a finite dimensional simple complex Lie algebra g [Dri 85]. The name Yangian is due to V. G. Drinfel'd, to honour C. N. Yang, who found the simplest solution of the Yang-Baxter equation, the rational R matrix [Yan 67] (see also [Bax 72, Bax 82]). This R matrix and the Yang-Baxter equation were discovered in the studies of exactly solvable two dimensional statistical models and quantum integrable systems. One of the most important results was the quantization of the inverse scattering method by the Leningrad school [FST 79], which led to the formulation of quantum groups in the so-called RTT formalism [FRT 89]. These quantum groups are deformations of semi-simple Lie algebras and are closely associated with quantum integrable systems. In particular, the representation theory of the Yangian Y(sl 2 ), which is one of the simplest examples of an infinite dimensional quantum group, is used to solve the rational 6-vertex statistical model [Bax 82], the XXX Heisenberg spin chain [FadTak 81], and the principal chiral field model with the SU (2) symmetry group [FadRes 86, Mac 04].
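For orientation, Yang's rational solution referred to above can be written (in a common normalisation, with $P$ the permutation operator on $\mathbb{C}^2 \otimes \mathbb{C}^2$) as
\[
R(u) \;=\; \mathbb{1} \;+\; \frac{P}{u},
\qquad
R_{12}(u-v)\,R_{13}(u)\,R_{23}(v) \;=\; R_{23}(v)\,R_{13}(u)\,R_{12}(u-v).
\]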
The mathematical formalism for quantum groups and for the quantization of Lie bi-algebras was presented by Drinfel'd in his seminal work [Dri 85] (see also [Dri 87]). Drinfel'd gave a quantization procedure for the universal enveloping algebra U(g̃) of any semi-simple Lie algebra g̃. 1 The quantization is based on the Lie bi-algebra structure on g̃ given by a skew-symmetric map δ : g̃ → g̃ ∧ g̃, the cocommutator. A quantization of (g̃, δ) is a (topological) Hopf algebra (U_ℏ(g̃), ∆_ℏ), such that U_ℏ(g̃)/ℏ U_ℏ(g̃) ≅ U(g̃) as a Hopf algebra and δ(x) = ℏ^{-1} ( ∆_ℏ(X) − σ ∘ ∆_ℏ(X) ) mod ℏ, where σ ∘ (a ⊗ b) = b ⊗ a and X is any lifting of x ∈ g̃ to U_ℏ(g̃). The Lie bi-algebra structure on g̃ can be constructed from the Manin triple (g̃, g̃_+, g̃_−), where g̃_± are isotropic subalgebras of g̃ such that g̃_+ ⊕ g̃_− = g̃ as a vector space and g̃_− ≅ g̃_+^*, the dual of g̃_+. Then the commutation relations of the quantum group can be obtained by requiring ∆_ℏ to be a homomorphism of algebras U_ℏ(g̃) → U_ℏ(g̃) ⊗ U_ℏ(g̃). The question of the existence of such a quantization for any Lie bi-algebra was raised by Drinfel'd in [Dri 92] and was answered by P. Etingof and D. Kazhdan in [EtiKaz 96]. They proved that any finite or infinite dimensional Lie bi-algebra admits a quantization. Here we will consider only the Yangian case, U_ℏ(g̃) = Y(g) with g̃ = L + . We will use the so-called Drinfel'd basis approach, which is very convenient for approaching the quantization problem.
In physics, quantum groups are related to unbounded quantum integrable models and their extensions to models with boundaries. The underlying symmetry of the models with boundaries is given by coideal subalgebras of quantum groups that were introduced in the context of 1+1 dimensional quantum field theories on a half-line by Cherednik [Che 84] and in the context of one dimensional spin chains with boundaries by Sklyanin [Skl 88] in the so-called reflection algebra formalism. Mathematical aspects of reflection algebras in the RTT formalism, called twisted Yangians, were first considered by G. Olshanskii in [Ols 90] and were used to study models with boundaries in [BasBel 11, BelFom 12]. Twisted Yangians in the Drinfel'd basis can also be used to solve the spectral problem of a semi-infinite XXX spin chain for an arbitrary simple Lie algebra using the 'Onsager method' [BasBel 10]. We also remark that twisted Yangians of this type were shown to play an important role in quantum integrable systems for which the RTT presentation of the underlying symmetries is not known, for example in the AdS/CFT correspondence [MacReg 10, MacReg 11].
The paper is organized as follows: in section 2 we recall basic definitions of simple complex Lie algebras and define the symmetric pair decomposition with respect to an involution θ. Then, in section 3, we recall the definition of the half-loop Lie algebra L + of g in the Drinfel'd basis, and introduce the Drinfel'd basis of the twisted half-loop Lie algebra H + with respect to the symmetric pair decomposition of L + . In section 4 we construct the Lie bi-algebra structure on L + and the Lie bi-ideal structure on H + that provide the necessary data to achieve the quantization presented in section 5. The special case g = sl 3 is fully considered in section 6, where we present all the corresponding twisted Yangians. Section 7 contains the proofs which were omitted from the main part of the paper due to their length and for the convenience of the reader.
Acknowledgements: The authors would like to thank P. Baseilhac, N. Crampé, N. Guay, N. Mackay and J. Ohayon for discussions and their interest for this work. V.R. acknowledges the UK EPSRC for funding under grant EP/K031805/1.
Definitions and preliminaries
2.1. Lie algebra. Consider a finite dimensional complex simple Lie algebra g of dimension dim(g) = n, with a basis {x_a} given by
(2.1) [x_a, x_b] = α^c_{ab} x_c, α^c_{ab} + α^c_{ba} = 0, α^c_{ab} α^e_{dc} + α^c_{da} α^e_{bc} + α^c_{bd} α^e_{ac} = 0.
Here α^c_{ab} are the structure constants of g and the Einstein summation rule for dummy indices is assumed. We will further always assume g to be simple. Let η_{ab} denote the non-degenerate invariant bilinear (Cartan-Killing) form of g in the {x_a} basis,
(2.2) (x_a, x_b)_g = η_{ab} = α^d_{ac} α^c_{bd},
which can be used to lower the indices {a, b, c, . . .} of the structure constants, α^d_{ab} η_{dc} = α_{abc} with α_{abc} + α_{acb} = 0.
The inverse of η ab is given by η ab and satisfies η ab η bc = δ c a . Set {a, b} = 1 2 (ab + ba). Let C g = η ab {x a , x b } denote the second order Casimir operator of g and let c g be its eigenvalue in the adjoint representation. For a simple Lie algebra it is non-zero and is given by The elements α bc a satisfy the co-Jacobbi identity, which is obtained by raising one of the lower indices of the Jacobi identity in (2.1). Moreover, contracting α bc a with the Lie commutator in (2.1) gives 2.2. Symmetric pair decomposition. Let θ be an involution of g. Then g can be decomposed into the positive and negative eigenspaces of θ, i.e. g = h ⊕ m with θ(h) = h and θ(m) = −m, here dim(h) = h, dim(m) = m satifying h + m = n. Numbers h and m correspond to the number of positive and negative eigenvalues of θ. This decomposition leads to the symmetric pair relations From the classification of the symmetric pairs for simple complex Lie algebras it follows that the invariant subalgebra h is a (semi)simple Lie algebra which can be decomposed into a direct sum of two simple complex Lie algebras a and b, and a one dimensional centre k at most. We write h = a ⊕ b ⊕ k (see e.g. section 5 in [Hel 78]). Set dim(a) = a and dim(b) = b. Let the elements with i = 1 . . . a, i ′ = 1 . . . b and p = 1 . . . m, (2.5) be a basis of g such that θ(X α ) = X α for any α ∈ {i, i ′ , z}, and θ(Y p ) = −Y p . We will further use indices i(j, k, ...) for elements X α ∈ a, primed indices i ′ (j ′ , k ′ , ...) for elements X α ∈ b, index α = z for the central element X α ∈ k, and indices p(q, r, . . .) for Y p ∈ m, when needed. We will denote the commutators in this basis as follows: The structure constants above are obtained from the ones of g by restricting to the appropriate elements.
Here and further we will use the sum symbol α to denote the summation over all simple subalgebras of h. The Einstein summation rule for the Greek indices will be used in cases when the sum is over a single simple subalgabera of h. The notation α = γ means that indices α and γ correspond to different subalgebras of h. The structure constants given above satisfy the (anti-)symmetry relations f γ αβ + f γ βα = 0, g q µp + g q pµ = 0, w µ pq + w µ qp = 0, and the homogeneous and mixed Jacobi identities , w β pq f µ αβ + g r αp w µ qr − g r αq w µ pr = 0, with α(β, γ, ...) = i(j, k, ...) ∈ a or α(β, γ, ...) = i ′ (j ′ , k ′ , ...) ∈ b and α (w α pq g s rα + w α qr g s pα + w α rp g s qα ) = 0, g r pα w β qr − g r qα w β pr = 0 for α = β. (2.8) We will further refer to the set {X α , Y p } = {X i , X i ′ , X z , Y p } given by (2.5) and satisfying relations (2.6-2.8) as to the symmetric space basis for a given Lie algebra g and involution θ.
The Killing form of g has a block diagonal decomposition with respect to the symmetric space basis, namely with the remaining entries being trivial. The Casimir element C g in this basis decomposes as The decomposition of the inverse Killing form can be used to raise the indices of the structure constants. We set f βγ α = κ βµ f γ αµ , g qν p = κ νρ g q ρp , w pq ν = (κ m ) pr g q νr . with α(β, γ, µ...) = i(j, k, ...) or i ′ (j ′ , k ′ , ...) and ν(ρ) = i(j) or i ′ (j ′ ) or z(z). Consider the commutation relations. For the generator Y p we have The remaining commutation relations are trivial. Let c a , c b , c z and c m be the eigenvalues of C, C ′ , C z and C Y respectively. We have c g = c a + c b + c m + c z . Using (2.4) we find 3. Symmetric spaces and simple half-loop Lie algebras 3.1. Half-loop Lie algebra. Consider a half-loop Lie algebra L + generated by elements {x This algebra can be identified with the set of polynomial maps f : C → g using the Lie algebra isomorphism The half-loop Lie algebra has another basis conveniently called the Drinfel'd basis: Proposition 3.1. The half-loop Lie algebra L + admits a Drinfel'd basis generated by elements {x a , J( for any µ, ν ∈ C. In the rank(g) = 1 case the level-2 loop terrific relation 3.3 becomes trivial and for the rank(g) ≥ 2 case the level-3 loop terrific relation 3.4 follows from level-2 loop terrific relation. The isomorphism with the standard loop basis is given by the map A proof is given in section 7.1.
3.2. Twisted half-loop Lie algebra. Let us extend the involution θ of g to the whole of L + as follows: The twisted half-loop Lie algebra H + ∼ = g[u] θ is a fixed-point subalgebra of L + generated by the elements stable under the action of the (extended) involution θ, namely H + = {x ∈ L + | θ(x) = x}. In the physics literature it is often referred to as the twisted current algebra.
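The standard way to extend θ, consistent with the fixed-point description above, is level-by-level with an alternating sign (a sketch of the usual convention for twisted current algebras; the authors' formula is not reproduced verbatim):

```latex
\[
  \theta\bigl(x \otimes u^{k}\bigr) = (-1)^{k}\,\theta(x) \otimes u^{k},
  \qquad\text{so that}\qquad
  \mathcal{H}^{+} = g[u]^{\theta}
  = \bigoplus_{k \ge 0}\Bigl(\mathfrak{h} \otimes u^{2k} \;\oplus\; \mathfrak{m} \otimes u^{2k+1}\Bigr).
\]
```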
Consider the symmetric space basis of g. We write the half-loop Lie algebra L + in terms of the elements {X for all k, l ≥ 0 and α = λ. The action of θ on this basis is given by θ(X The twisted half-loop Lie algebra can be defined in terms of the Drinfel'd basis: Proposition 3.2. Let rank(g) ≥ 2. Then the twisted half-loop Lie algebra admits a Drinfel'd basis generated by elements {X α , B(Y p )} satisfying The isomorphism with the standard twisted half-loop basis is given by the map A proof is given in section 7.1. Note that in contrast to L + , the twisted algebra H + for rank(g) ≥ 2 has level-2 and level-3 higher-order defining relations, which we call horrific relations. This is due to the fact that even and odd levels of H + are not equivalent. The rank(g) = 1 case is exceptional. The Drinfel'd presentation in this case has a level-4 relation instead (see [BelCra 12], section 4.2).
Proposition 3.3. Let rank(g) ≥ 2. Then the even half-loop Lie algebra admits a Drinfel'd basis generated by elements for any µ, ν ∈ C. The isomorphism with the standard half-loop basis is given by the map i . The proof follows directly by Proposition 3.1, since g[u 2 ] ∼ = g[u] as Lie algebra.
4. Lie bi-algebras and bi-ideals 4.1. Lie bi-algebra structure of a half-loop Lie algebra. A Lie bi-algebra structure on L + is a skew-symmetric linear map δ : L + → L + ⊗ L + , the cocommutator, such that δ * is a Lie bracket and δ is a 1-cocycle, δ([x, y]) = x.δ(y) − y.δ(x), where dot denotes the adjoint action on L + ⊗ L + . The cocommutator is given for the elements in the Drinfel'd basis of L + by This cocommutator can be constructed from the Manin triple Lie algebra (L, L + , L − ), with L = g((u −1 )) the loop algebra generated by elements {x (n) } with x ∈ g, n ∈ Z and defining relations (3.1) (but with n, m ∈ Z), L + = g . A Manin triple is a triple of Lie bi-algebras (g, g + , g − ) together with a nondegenerate symmetric bilinear form ( , ) g on g invariant under the adjoint action of g: • g + and g − are Lie subalgebras of g; • g = g + ⊕ g − as a vector space; • ( , ) g is isotropic for g ± (i.e. (g ± , g ± ) L = 0); Here the bilinear form is given by (x (k) , y (l) ) L = (x, y) g δ k+l+1,0 .
Remark 4.1. If (g, g + , g − ) is a Manin triple for dim(g + ) = ∞, then (g + ) * ∼ = ḡ − , where ḡ − is the completion of g − . However, in our case (L + ) * ∼ = L − , as it is easy to see: Here in the second equality we have used the identity where V i denotes a finite dimensional vector space; an equivalent identity is used in the last equality.
The cocommutator is obtained using the duality between L + and L − . Recall that δ * : L − ⊗ L − → L − is the Lie bracket of L − . We can deduce the cocommutator δ of L + from the duality relation The cocommutator of the level zero generators x Let us consider an ansatz δ(J(x a )) = v lk a x k ⊗ x l for some v lk a . Then we must have v lk a α blk = c g η ab . It follows from (2.2) that δ(J(x a )) = α lk a x k ⊗ x l = [x a ⊗ 1, Ω g ]. 4.2. Lie bi-ideal structure of twisted half-loop algebras. The Lie bi-ideal structure of twisted half-loop algebras is constructed by employing the anti-invariant Manin triple twist. Here we will consider the left Lie bi-ideal structure. The right Lie bi-ideal is obtained in a similar way.
Definition 4.2 ([BelCra 12]). The anti-invariant Manin triple twist φ of (L, L + , L − ) is an automorphism of L satisfying: • φ is an involution; for all k ∈ Z). The twist φ gives the symmetric pair decomposition of the Manin triple (L, L + , L − ): Then we must have (H ± ) * ∼ = M ∓ . This is easy to check: This decomposition of the Manin triple allows us to construct the Lie bi-ideal structure on H + .
for all x ∈ H − and y ∈ M − .
We are now ready to define the Lie bi-ideal structure for (L + , H + ).
Proposition 4.1. The Lie bi-ideal structure of (L + , H + ), τ : H + → M + ⊗ H + is given by Proof. The construction of the Lie bi-ideal structure τ from the anti-invariant Manin triple twist is similar to the one of the Lie bi-algebra structure from the Manin triple. We have to consider the duality relation Consider the case θ ≠ id. For the level zero generators X This follows by similar arguments as for the level zero generators of the half-loop Lie algebra. Hence we have τ (X α ) = 0.
For the level one generators Y ] and the duality relation we obtain
Let us consider an ansatz
The Lie bi-ideal structure for the θ = id case follows from the pairing (G(x a ), x For completeness we give a remark which was stated by one of the authors in [BelCra 12].
Remark 4.2. The notion of left (right) Lie bi-ideal is related to the notion of co-isotropic subalgebra h of a Lie bi-algebra (g, δ). It is a Lie subalgebra which is also a Lie coideal, meaning that
Quantization
To obtain a quantization of a Lie bi-ideal we need to introduce some additional algebraic structures. Recall the definition of a bi-algebra and of a Hopf algebra. A bi-algebra is a quintuple (A, µ, ı, ∆, ε) such that (A, µ, ı) is an algebra and (A, ∆, ε) is a coalgebra; here A is a C-module, µ : A ⊗ A → A is the multiplication, ∆ : A → A ⊗ A is the comultiplication (coproduct), ı : C → A is the unit and ε : A → C is the counit. A Hopf algebra is a bi-algebra with an antipode S : A → A, an antiautomorphism of the algebra. (1) B is a submodule of A, i.e. there exists an injective homomorphism ϕ : B → A; (2) the coaction is a coideal map : B → A ⊗ B and is a homomorphism of modules; (3) coalgebra and coideal structures are compatible with each other, i.e. the following identities hold: (4) ǫ : B → C is the counit.
where m is the multiplication and i is the unit, is an algebra; (2) B is a subalgebra of A, i.e. there exists an injective homomorphism ϕ : B → A; (3) the triple (B, , ǫ) is a coideal of (A, ∆, ε).
The relation (5.1) is usually referred to as the coideal coassociativity of B. We will refer to (5.2) as the coideal coinvariance. We will refer to the map ϕ as the natural embedding ϕ : B ֒→ A.
The next definition, a quantization of a Lie bi-algebra g, is due to Drinfel'd [Dri 87]: Definition 5.3. Let (g, δ) be a Lie bi-algebra. We say that a quantized universal enveloping algebra (U (g), ∆ ) is a quantization of (g, δ), or that (g, δ) is the quasi-classical limit of (U (g), ∆ ), if it is a topologically free C[[ ]] module and: (1) U (g) / U (g) is isomorphic to U(g) as a Hopf algebra; (2) for any x ∈ g and any X ∈ U (g) equal to x (mod ) one has Note that (U (g), ∆ ) is a topological Hopf algebra over C[[ ]] and is a topologically free C[[ ]] module. Drinfel'd also noted that for a given Lie bi-algebra (g, δ), there exists a unique extension of the map δ : g → g ⊗ g to δ : U(g) → U(g) ⊗ U(g) which turns U(g) into a co-Poisson-Hopf algebra. The converse is also true. In such a way (U (g), ∆ ) can be viewed as a quantization of (U(g), δ).
In the remaining part of this section we will consider a quantization of symmetric pairs of half-loop Lie algebras (g,g θ ) = (L + , H + ). We will recall the coproduct of the Yangian Y(g) = U (L + ) which follows from the Lie bi-algebra structure on L + . Then we will construct the coaction of the twisted Yangian Y(g, g θ ) tw = U (L + , H + ) which will follow from the Lie bi-ideal structure on H + and the coideal compatibility relations (5.1) and (5.2). And finally, we will recall the defining relations of the Drinfel'd Yangian and give the main results of this paper, the defining relations of the twisted Yangians in Drinfel'd basis.
5.1. Coproduct on Y(g). The coproduct is given by This is the simplest solution of the quantization condition satisfying the coassociativity property The grading on Y(g) is given by As in the previous section, we need to consider the cases θ ≠ id and θ = id separately.
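The coproduct referred to at the start of Sect. 5.1 has, in the standard Drinfel'd presentation, the following form (a sketch of the textbook formula, with the deformation parameter written as ħ and Ω_g the split Casimir tensor; the authors' normalization may differ):

```latex
\[
  \Delta_{\hbar}(x_a) = x_a \otimes 1 + 1 \otimes x_a, \qquad
  \Delta_{\hbar}\bigl(\mathcal{J}(x_a)\bigr)
    = \mathcal{J}(x_a) \otimes 1 + 1 \otimes \mathcal{J}(x_a)
      + \tfrac{\hbar}{2}\,\bigl[\,x_a \otimes 1,\ \Omega_g\,\bigr],
  \qquad \Omega_g = \eta^{ab}\, x_a \otimes x_b .
\]
```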
Proposition 5.2. Let θ = id. Then the coideal subalgebra structure is given by The grading on Y(g, g) tw is given by deg(x a ) = 0, deg( ) = 1 and deg(G(x a )) = 2.
The proofs of Propositions 5.1 and 5.2 are stated in section 7.2. The map (5.9) is the MacKay twisted Yangian formula [DMS 01]. The next remark is due to Lemma 7.1: Remark 5.1. It will be convenient to write the coaction (5.11) in the following way
Yangians and twisted Yangians in Drinfel'd basis.
For any elements x i1 , x i2 , . . . , x im of any associative algebra over C, set where the sum is over all permutations π of {i 1 , i 2 , . . . , i m } and Theorem 5.1. Let g be a finite dimensional complex simple Lie algebra. Fix a (non-zero) invariant bilinear form on g and a basis {x a }. There is, up to isomorphism, a unique homogeneous quantization Y(g) := U (g[u]) of (g[u], δ). It is topologically generated by elements x a , J (x a ) with the defining relations: where β ijk abc = α il a α jm b α kn c α lmn , γ ijk abcd = α e cd β ijk abe + α e ab β ijk cde , (5.24) for all x a ∈ g and λ, µ ∈ C. The coproduct and grading is given by (5.3), (5.4) and (5.6), the antipode is The counit is given by ε (x a ) = ε (J (x a )) = 0.
An outline of a proof can be found in chapter 12 of [ChaPre 94]. Let us make a remark on the Drinfel'd terrific relations (5.22) and (5.23), which are deformations of the relations (3.3) and (3.4), respectively. The right-hand sides (rhs) of the terrific relations are such that ∆ : Y(g) → Y(g) ⊗ Y(g) is a homomorphism of algebras. Choose any total ordering on the elements x a , J (x b ). Then it is easy to see that the basis of Y(g) is spanned by the totally symmetric polynomials {x a1 , . . . , x am , J (x b1 ), . . . , J (x bn )} with m + n ≥ 1, m, n ≥ 0, and ordering a i · · · a m , b i · · · b n . Moreover, the defining relations must be even in . Indeed, consider the coaction on the left-hand side (lhs) of (5.22). The linear terms in vanish due to the Jacobi identity. What remains are the 2 -order terms cubic and totally symmetric in x a . Hence the rhs of the terrific relation must be of the form 2 A ijk abc {x i , x j , x k } for some set of coefficients A ijk abc ∈ C. By comparing the terms on the both sides of the equation and using the Jacobi identity one finds A ijk abc = β ijk abc . The level three terrific relation (5.23) is obtained in a similar way. 2 Theorem 5.2. Let (g, g θ ) be a symmetric pair decomposition of a finite dimensional simple complex Lie algebra g of rank(g) ≥ 2 with respect to the involution θ, such that g θ is the positive eigenspace of θ. Let {X α , Y p } be the symmetric space basis of g with respect to θ. There is, up to isomorphism, a unique homogeneous quantization Y(g, g θ ) tw := U (g [u], g[u] θ ) of (g [u], g[u] θ , τ ). It is topologically generated by elements X α , B(Y p ) with the defining relations: for all X α , Y p ∈ g and a, b ∈ C. The coaction and grading is as in Proposition 5.1. The counit is ǫ (X α ) = ǫ (B(Y p )) = 0 for all non-central X α . In the case when g θ has a non-trivial centre k generated by X z , then ǫ (X z ) = c with c ∈ C.
In the case when h has a central element, the one dimensional representation of h has a free parameter c ∈ C. This parameter corresponds to the free boundary parameter of a quantum integrable model with a twisted Yangian as the underlying symmetry algebra. For Lie algebras of type A, this parameter also appears in the solutions of the boundary intertwining equation leading to a one-parameter family of the boundary S-matrices satisfying the reflection equation [AACDFR 04].
Theorem 5.3. Let g be a finite dimensional simple complex Lie algebra of rank(g) ≥ 2. Let {x i } be a basis of g. Fix a (non-zero) invariant bilinear form on g and a basis {x i }. There is, up to isomorphism, a unique homogeneous quantization Y(g, g) tw := U (g[u], g[u 2 ]) of (g[u], g[u 2 ], τ ). It is topologically generated by elements x i , G(x i ) with the defining relations: for all x a ∈ g and λ, µ ∈ C. The coaction and grading is as in Proposition 5.2. The co-unit is ǫ (x i ) = ǫ (G(x i )) = 0.
Theorems 5.2 and 5.3 can be proven using essentially the same strategy outlined in chapter 12 of [ChaPre 94]. The complicated part is to obtain the horrific relations (5.27), (5.28) and (5.33), which are quantizations of (3.8), (3.9) and (3.11), respectively. A proof of the first two horrific relations is given in the first part of section 7.3. Proving the third horrific relation is substantially more difficult. We have given an outline of a proof in the second part of section 7.3. We believe that coefficients of the horrific relation (5.33) could be further reduced to a more elegant and compact form. We have succeeded to find such a form for the sl 3 Lie algebra: Remark 5.2. For g = sl 3 the coefficients of the horrific relation (5.33) get simplified to The Yangian Y(g) has a one-parameter group of automorphisms τ c , c ∈ C, given by τ c (x a ) = x a and τ c (J (x a )) = J (x a ) + c x a , which is compatible with both algebra and Hopf algebra structure. An analogue of this automorphism for the twisted Yangian Y(g, g θ ) tw is a one-parameter group of automorphism of embeddings (5.9) given by τ c (ϕ(X α )) = X α and τ c (ϕ There is no analogue of such an automorphism for the twisted Yangian Y(g, g) tw , since it is not compatible with the relation (5.33).
Coideal subalgebras of the Yangian Y(sl 3 )
In this section we present three coideal subalgebras Y(g, h) tw of Y(sl 3 ), with h = so 3 , h = gl 2 and h = sl 3 (θ = id case). We will denote generators of the first two algebras by symbols h, e, f, k and H, E, F. We will start by recalling the definition of the Lie algebra sl 3 in the Serre-Chevalley basis and in the Cartan-Chevalley basis.
6.1. The sl 3 Lie algebra. Lie algebra sl 3 in the Serre-Chevalley basis is generated by {e i , f i , h i | i = 1, 2} subject to the defining relations The quadratic Casimir operator of U(sl 3 ) is given by (c g = 6) Definition 6.1. Let Y(sl 3 ) denote the associative unital algebra with sixteen generators e i , f i , h j , J (e i ), J (f i ), J (h j ) with i = 1, 2, 3, j = 1, 2 and the defining relations (6.2) and Definition 6.2. The Hopf algebra structure on Y(sl 3 ) is given by 6.3. Orthogonal twisted Yangian Y(sl 3 , so 3 ) tw . Let the involution θ be given by The action of θ on the rest of the algebra elements is deduced by the constraint θ 2 = id. The symmetric space basis for sl 3 is given by g θ = {h = h 1 + h 2 , e = e 1 − e 2 , f = f 1 − f 2 } and m = {h 1 − h 2 , e 1 + e 2 , f 1 + f 2 , e 3 , f 3 }.
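The defining relations referred to in Sect. 6.1 are the standard Serre-Chevalley relations of sl 3 ; a sketch is given below (the authors' equation (6.2) is not reproduced, and sign and normalization conventions may differ):

```latex
\[
  [h_i, h_j] = 0, \quad [e_i, f_j] = \delta_{ij}\, h_i, \quad
  [h_i, e_j] = a_{ij}\, e_j, \quad [h_i, f_j] = -a_{ij}\, f_j,
\]
\[
  (\operatorname{ad} e_i)^{1-a_{ij}}(e_j) = 0, \quad
  (\operatorname{ad} f_i)^{1-a_{ij}}(f_j) = 0 \quad (i \ne j), \qquad
  (a_{ij}) = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}.
\]
```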
Definition 6.6. The algebra Y(sl 3 , gl 2 ) tw admits a unique left co-action given by for all x ∈ {h, e, f, k} and The co-unit is given by for all x ∈ {e, f, h} and Y ∈ {E i , F i } (i = 1, 2).
Definition 6.7. Let Y(sl 3 , sl 3 ) tw denote the associative unital algebra with sixteen generators e i , f i , h j , G(e i ), G(f i ), G(h j ) with i = 1, 2, 3, j = 1, 2, obeying the standard sl 3 Lie algebra relations of the Cartan-Chevalley basis and the standard level-2 Lie relations for any x, y ∈ {e i , f i , h j }, a, b ∈ C, and the following level-4 horrific relation Definition 6.8. The algebra Y(sl 3 , sl 3 ) tw admits a left co-action given by a . The cases n + m = 0 and n + m = 1 are given by (3.2). The case n + m = 2 follows from the level-2 Drinfel'd terrific relation (3.3). Indeed, we can rewrite (7.2) as ]) = 0. For n = m = 1 it is equivalent to the level-2 Drinfel'd terrific relation and for n = m this equality follows from definition (7.1) and the Jacobi identity (2.1). Let us recall that for the rank(g) = 1 case, this level-2 terrific relation is trivial and one has to consider the level-3 terrific relation (5.23) instead, which can be constructed in a similar way.
Define level-n generators by α are the level-0 and level-1 Drinfel'd generators, respectively. Then (7.2) is equivalent to for some N αβ ∈ C × satisfying N αβ = −N βα . We will prove (7.3) by induction. The base of induction is given by the cases with 0 ≤ m + n ≤ 2. Now suppose (7.3) is true for some n + m = k ≥ 3. The action of adh
Acting with adh
(1) α and using (7.6) we get To obtain the remaining relations we act with adh (1) α on the third relation in (7.3). This gives α+β .
The level-2 generators are defined by . They are required to satisfy the following identities: The first identity is the Lie algebra relation for the level-2 generators. It follows by a direct calculation: ] by (7.14) (2) (X γ ), by the mixed Jacobi identity w pr γ f γ αβ + w qp β g r αq + w qr β g p qα = 0 and by (7.14). The second identity ensures that there are exactly h = dim(g θ ) level-2 generators. It gives the level-2 horrific relation (3.8): Now consider the level-3 generator defined by We require B (3) (Y p ) to satisfy the following identities: The first identity is the Lie algebra relation for the level-3 generators. The second identity ensures that there are exactly m = dim(m) level-3 generators. Let us show the first identity. We will need the following mixed Jacobi relation (7.18) g βq p f µ αβ − g µr p g q rα − g µq r g r αp = 0.
For the second identity in (7.17) we have which combined with (7.19) gives the level-3 horrific relation (3.9). For completeness, let us recall that for the rank(g) = 1 case, the level-2 and level-3 horrific relations are trivial and one has to consider a level-4 horrific relation instead. This can be shown in a similar way.
We need to show that (7.11) and (7.13) hold for all n ≥ 1, and that (7.12) holds for all n ≥ 2 provided they hold for n = 0 and n = 1, respectively. We will use the symmetric space basis of the Cartan decomposition of g and an induction hypothesis to complete the proof. For simplicity, we will restrict to a special case of a symmetric pair with the Cartan decomposition given by h = l h ⊕ (⊕ α h α ) and m = (⊕ µ m µ ) with α ∈ ∆ h and µ ∈ ∆ m , the roots of h and m, respectively. This case corresponds to the symmetric pair of type AIII.
We will use the lower case letters to denote the generators of even levels and the upper case letters for the odd levels. We define level-(2k + 2) generators {h (1) µ = E µ are the level-0 and level-1 Drinfel'd basis generators. Let α, β ∈ ∆ h , µ, ν, λ ∈ ∆ m , and γ, δ ∈ ∆ denote any root. The relations (7.11), (7.12) and (7.13) in this basis are equivalent to Suppose that the relations above hold for all levels up to 2k + 1 ≥ 3, the marginal case being the base of induction consisting of (7.23), (7.24) and (7.25) with m = n = 0, which give level-0, level-1 and level-2 relations, respectively, and (7.24) with m = 0 and n = 1 giving level-3 relations.
We have demonstrated that all level-(2k + 2) relations hold provided all the defining relation of level no greater than (2k + 1) hold. It remains to show that level-(2k + 3) relations in (7.24) also hold. Consider (7.23) with 2n + 2m + 2 = 2k + 2. Acting with −adE (1) µ on the first relation we find Set δ = γ and use the induction hypothesis and definition (7.22). Then Next, act with −adE (1) µ on the second relation in (7.23). We obtain µ+β , which combined with the initial expression gives the required relation for arbitrary 2n + 2m = 2k + 3, namely µ+β . To end this section we remark that we would welcome a proof of the Drinfel'd basis for the half-loop and twisted half-loop Lie algebras, which would not be based on the Cartan decomposition of the underlying Lie algebra.
Proofs of coactions.
7.2.1. Proof of Proposition 5.1. The Lie bi-ideal structure on H + defines the coaction up to the first order in , For the level zero generators of H + the Lie bi-ideal structure is trivial and the minimal form of the coaction is given by The coideal compatibility relations (5.1) and (5.2), and the classical limit requirement ϕ(X α )| →0 = X α implies that ϕ(X α ) = X α is the natural inclusion ϕ : X α ∈ Y(g, h) tw → X α ∈ Y(g).
For the level one generators of H + the Lie bi-ideal structure is non-trivial. The simplest coaction is given by As previously, the coaction must satisfy relations (5.1) and (5.2), and in the classical limit we must obtain ϕ(B(Y p ))| →0 = B(Y p ). By (5.2) it follows Consider an ansatz ϕ(B(Y p )) = J (Y p ) + F (0) p with some level zero element F We choose F . It remains to check the coideal coassociativity (5.1) and the homomorphism property ϕ( In what follows we will need the following auxiliary Lemma: Lemma 7.1. In a simple Lie algebra g the following identities hold: i α cs j α br k (α a rs x a ⊗ {x b , x c } + α a bc {x s , x r } ⊗ x a ) = α jk i α cs j α br k α a rs (x a ⊗ {x b , x c } + {x c , x b } ⊗ x a ) by renaming b, c ↔ r, s. The third identity is obtained using the following auxiliary identities 2 α jk i α cr j α bs k α a sr = α jk i α cr j (α bs k α a sr + α as k α b sr ) + 1 2 c g α jc i α ab j , (7.41) 2 α jk i α cr j α bs k α a sr = α jk i α bs k (α cr j α a sr + α ar j α c sr ) + 1 2 c g α jb i α ac j . (7.42) The first auxiliary identity follows by multiple applications of the Jacobi identity: α jk i α cr j (α bs k α a sr − α as k α b sr ) = α jk i α cr j α s rk α ab s = 1 2 α jc i α kr j α s rk α ab s = 1 2 c g α jc i α ab j . The second auxiliary identity follows by the b ↔ c symmetry and renaming j, s ↔ k, r.
To complete the proof we need to calculate all elements in (H (4) abc ) as we did in the proof of Theorem 5.2 above. Here we will not attempt to reach this goal. To give all the details of the proof would take more pages than the present paper itself contains, and thus we refrain from making such an attempt. An important question is whether the coefficients Ψ ijk abc , Φ ijk abc and Φ ijklm abc can be written in an elegant and compact form. We have not succeeded in accomplishing this, so we will leave it as an open question for further study. | 2017-03-01T05:36:13.000Z | 2014-01-09T00:00:00.000 | {
"year": 2014,
"sha1": "c1bb84bfbd836aac4a8b3ce489b238f24d175a5e",
"oa_license": "CCBYSA",
"oa_url": "http://www.emis.de/journals/SIGMA/2017/011/sigma17-011.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "0901e0a1efad1696100ae9b28b1755e68020b65b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
14027402 | pes2o/s2orc | v3-fos-license | The BCD of response time analysis in experimental economics
For decisions in the wild, time is of the essence. Available decision time is often cut short through natural or artificial constraints, or is impinged upon by the opportunity cost of time. Experimental economists have only recently begun to conduct experiments with time constraints and to analyze response time (RT) data, in contrast to experimental psychologists. RT analysis has proven valuable for the identification of individual and strategic decision processes including identification of social preferences in the latter case, model comparison/selection, and the investigation of heuristics that combine speed and performance by exploiting environmental regularities. Here we focus on the benefits, challenges, and desiderata of RT analysis in strategic decision making. We argue that unlocking the potential of RT analysis requires the adoption of process-based models instead of outcome-based models, and discuss how RT in the wild can be captured by time-constrained experiments in the lab. We conclude that RT analysis holds considerable potential for experimental economics, deserves greater attention as a methodological tool, and promises important insights on strategic decision making in naturally occurring environments.
Introduction
It seems widely agreed that decisions ''in the wild'' (Camerer 2000, p. 148) are often afflicted by time pressure, typically to the decision maker's detriment. Addressing these effects of time pressure, the common adage ''to sleep on it'', for example, implies that delaying a decision can improve its quality by allowing more time to reflect on it cognitively and emotionally. In fact, legislators have acknowledged the influence of the interaction of time and emotions on decisions: Mandatory ''cooling-off periods'' are used to temper the effects of sales tactics such as time pressure on consumer purchases by allowing consumers to renege on impulse purchases (Rekaiti and Van den Bergh 2000). Similarly, ''cooling-off periods'' between the filing and the issuance of a divorce decree have been found to reduce the divorce rate (Lee 2013). When time is scarce, decision makers have less time to process information pertaining to the specific case at hand, and instead may rely on their priors, which may be driven by stereotypes. Under time pressure, stereotypes about defendants are more likely to be activated and can affect judgments of guilt and proposed punishment (van Knippenberg et al. 1999). Similarly, judgments under time pressure about a suspect holding a weapon or not are more likely to exhibit racial biases (Payne 2006). Assessments of whether acute medical attention is required can also be shaped by time pressure (Thompson et al. 2008). Other examples of environments that operate under time pressure include auctions, bargaining and negotiations, urgent medical care, law enforcement, social settings with coordination issues, and human conflict; moreover, all decisions have an implicit non-zero opportunity cost of time. Beyond the time taken to deliberate, collecting and processing information efficiently is also time-consuming. Yet, the temporal dimension of decision making has not featured prominently in economists' analyses of behavior. We argue below that it often matters, both for individual and strategic decision making (henceforth, individual and strategic DM). We will argue that the analysis of (possibly forced) response time (RT) data can significantly complement standard behavioral analyses of decision making; of course, it is no panacea and we will highlight challenges and pitfalls along the way.
The scientific measurement of the timing of mental processes (mental chronometry), starting with Donders (1868), has a long tradition in the cognitive psychology literature-see Jensen (2006), Luce (2004) and Svenson and Maule (1993) for contemporary discussions. While psychologists have long acknowledged the benefits of jointly analyzing choice and possibly forced RT data, even behavioral economists have until recently paid little attention to RT. Many of the most prominent behavioral models remain agnostic about RT, e.g., Prospect Theory (Kahneman and Tversky 1979;Tversky and Kahneman 1992), models of fairness (Bolton and Ockenfels 2000;Charness and Rabin 2002;Fehr and Schmidt 1999), and temporal discounting models (Laibson 1997).
Early work in economics can be classified into two types of RT applications. The first type of application emphasizes the usefulness of RT analysis for DM without any time constraints (Rubinstein 2004, 2006, 2007, 2008, 2013, 2016), which we refer to as endogenous RT. Decision makers are free to decide how long to deliberate on a problem; RT is shaped by the opportunity cost of time and the magnitude of the task incentives. Consequently, rational decision makers must choose a point on a speed-performance efficiency frontier. For economists, performance will typically be measured as utility. This is consistent with an unconstrained utility maximization problem only when the opportunity cost of time is very low relative to the incentives, thereby excluding a significant proportion of real-world decision environments. Researchers working with endogenous RT typically measure the time taken for a subject to reach a final (committed) decision-we refer to this as single RT. However, subjects' provisional choices may be elicited throughout the deliberation period at various times (Agranov et al. 2015;Caplin et al. 2011)-we refer to this as multiple RT. Multiple RT captures the evolution of within-subject decision processes over time, yielding more useful information about the dynamic underpinnings of decision making. In most experiments, payoffs are typically independent of RT (non-incentivized). Another possibility is to use incentivized tasks that introduce a benefit to answering more quickly, for example, by having a time-based payoff reward or penalty (e.g., Kocher and Sutter 2006). 1 The second type of application emphasizes the examination of DM under time constraints (Kocher et al. 2013;Kocher and Sutter 2006;Sutter et al. 2003), which we refer to as exogenous RT. The most common type of time constraint is time pressure, i.e., limited time to make a decision. Time delay, i.e., the imposition of a minimum amount of time, can also be found in some studies, usually those interested in the effects of emotions on decision making (e.g., Grimm and Mengel 2011). Decision makers are increasingly being called upon to multi-task, i.e., handle many tasks and decisions almost simultaneously or handle a fixed number of tasks within a time constraint. Measuring the time allocated to individual tasks is crucial to understanding how decision makers prioritize and allocate their attention. One technique of implementing time pressure in the lab is to impose a time limit per set of tasks, instead of per task, as is typically done. This route is taken by Gabaix et al. (2006), who find qualitative evidence of efficient time allocation, i.e., subjects allocated more time to tasks that were more difficult. In the majority of studies, treatments within an experiment typically compare an endogenous RT treatment with other exogenous RT constraints, i.e., RT is the only variable that is manipulated across treatments. However, if other variables are also simultaneously manipulated across treatments, it is possible that the RT manipulations will interact to different degrees with the other variables. Furthermore, knowledge that an opponent is also under time pressure could induce a change in beliefs about how the opponent will behave. These two examples highlight the importance of a thorough understanding of RT constraints and a well-designed experiment that minimizes the impact of such issues-we return to the issue of identification later in Sect. 5.1.
Endogenous and exogenous RT analyses differ in the benefits that they offer. The former's usefulness lies primarily in revealing additional information about a decision maker's underlying cognitive processes or preferences (aiding in the classification of decision-maker types) and the effects of deliberation costs on behavior. The latter's usefulness lies primarily in exploring the robustness of existing models to different time constraints, i.e., verifying the external validity of models and the degree to which they generalize effectively to different temporal environments. We will present evidence that behavior on balance is strongly conditional on the time available to make a decision. In fact even the perception of time pressure, when none exists, can significantly affect behavior (Benson 1993;DeDonno and Demaree 2008;Maule et al. 2000).
Experimental designs manipulating realistic time constraints in the lab are a useful tool to advance our understanding of behavior and adaptation to time constraints. Exogenous RT analysis has already led to important insights within the context of two different approaches to modeling decision making. The first approach examines how decision processes change under time pressure. Historically, this has been the focus of research in cognitive psychology that was driven by the belief that cognition and decision making rules are shaped by the environment (Gigerenzer 1988;Gigerenzer et al. 1999, 2011;Gigerenzer and Selten 2002;Hogarth and Karelaia 2005;Karelaia and Hogarth 2008;Payne et al. 1988, 1993). By exploiting statistical characteristics of the environment, such ecologically rational heuristics (or decision rules) are particularly robust, even outperforming more complex decision rules in niche environments. This raises the following question. How are the appropriate heuristics chosen for environments with different temporal characteristics? A consensus has emerged from this literature that time pressure leads to systematic shifts in information search and, ultimately, selected decision rules (Payne et al. 1988, 1996;Rieskamp and Hoffrage 2008). Subjects adapt to time pressure by: (a) acquiring less information, (b) accelerating information acquisition, and (c) shifting from alternative- towards attribute-based processing, i.e., towards a selective evaluation of a subset of the choice characteristics. These insights from cognitive psychology emerged from individual DM; in we present evidence that similar insights can be had for strategic DM. Imposing time pressure in one-shot 3 × 3 normal-form games led to changes in information search (both acceleration and more selective information acquisition) that also have been documented for individual DM as well as the increased use of simpler decision rules such as Level-1 reasoning (Costa-Gomes et al. 2001;Stahl and Wilson 1995). 2 The second approach examines how preferences may depend on time constraints. This approach contributes to the discussion on the (in)stability or context-dependence of preferences by adding the temporal dimension to the debate (Friedman et al. 2014). Specifically, we will review evidence that a wide range of preferences are moderated by time constraints. For example, risk preferences are affected by time pressure. Risk seeking (or gain seeking relative to an aspiration level) can be stronger under time pressure in the gain or mixed domains, although this may depend on framing (e.g., Kocher et al. 2013;Saqib and Chan 2015;Young et al. 2012). Furthermore, RT analysis has led to a burgeoning inter-disciplinary literature and debate about the relationship between social preferences and RT (both endogenous and exogenous). A debate is in progress about whether pro-social behavior is intuitive, and whether people are more likely to behave more selfishly under time pressure (e.g., Piovesan and Wengström 2009;Rand et al. 2012;Tinghög et al. 2013;Verkoeijen and Bouwmeester 2014). This is one of the most exciting topics that RT analysis has motivated, as the nature of human cooperation is central to our understanding of the functioning of society-we will discuss this debate in detail in the next section.
The analysis of endogenous RT-while not as common-has also produced some interesting findings in experimental economics. Recall that endogenous RT analysis is primarily a methodological tool that allows researchers to learn more about individuals' decision processes and preferences, which tend to be quite heterogeneous. Consequently, researchers are often interested in the classification of decision-makers into a set of types based, say, on social preferences and risk preferences. Classification is typically accomplished solely on the basis of choices (through revealed preferences), but response time can also be used for this purpose. Numerous studies have determined that RT can be used to predict behavior out-of-sample or to classify subjects into types, often more efficiently than using other classical variables such as the level of risk aversion (Rubinstein 2013) and even normative solutions (Schotter and Trevino 2014b). Chabris et al. (2008, 2009) found that intertemporal discount parameters estimated using only RT data were almost as predictive as those estimated traditionally from choice data. Rubinstein (2016) proposes classifying players within a spectrum called the contemplative index. The degree of contemplation or deliberation that a person exhibits seems to be a relatively stable personality trait, which can be used to predict behavior even across games.
While experimental economists have begun tapping into the potential of exogenous RT analysis, they have not embraced endogenous RT analysis to the same degree. It is our belief that there still exists significant potential for the latter; however, similarly to the endogenous RT work in cognitive psychology, unleashing the full potential is aided by the use of procedural (process-based), rather than substantive (outcome-based), models of behavior. In contrast to substantive models, procedural models stipulate how decisions are made (specifying the mechanisms and processes) in addition to the resulting decisions. Procedural models that jointly predict choice and RT are crucial for predicting how adaptation occurs in response to RT constraints-the class of sequential-sampling models discussed in Sect. 3.6 is one example. In mathematical psychology, model comparisons of procedural models have a tradition of using RT predictions (not just choices) to falsify models-see for example Marewski and Melhorn (2011). Our literature review 3 revealed that the existing RT studies in economics exhibit a lack of formal procedural modeling and are most often viewed through the lens of dual-system models (Kahneman 2011). These models contrast a faster, more instinctive and emotional System 1 with a slower, more deliberative System 2-under time pressure System 1 is more likely to be activated. Many studies on social preferences are devoted to reverse inference based on these two systems, i.e., types of decisions that are made faster are categorized as intuitive. This may be a problematic identification strategy.
We have briefly presented what we see as examples of how RT analysis has already led to important insights in experimental economics. The case for collecting RT data in economic experiments seems strong, as RT is an additional variable available at virtually zero cost for all computerized experiments. If no time constraints are imposed, the collection of RT data is not noticeable by experimental subjects and neither primes nor otherwise affects their behavior. While there is little cost associated with collecting the data, the benefits depend on the type of study. Response time analysis seems particularly useful in the cases that we have outlined above where time constraints may mediate decision-makers' preferences (e.g., risk or social preferences) or processes. Also, in information-rich environments where information search or retrieval may be costly, the imposition of a time constraint or high opportunity cost of time is likely to have an amplified effect on behavior. In empirical model comparison studies, 4 where it is practically difficult to collect enough choice data on a large enough set of tasks, RT can be used to more effectively discriminate between procedural models by increasing the falsifiability of models (they may be rejected either for poor choice predictions or poor RT predictions). Finally, even basic response time analysis can be useful in virtually any experimental study. Extremely quick responses or very slow responses are often symptomatic of subjects that are not engaging with the experiment seriously. The influence of such outliers on the conclusions drawn from experiments can be extremely problematic as we will show later on.
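As a concrete illustration of such basic screening, the sketch below flags implausibly fast and extremely slow responses using a robust cutoff on log response times. This is our own illustrative Python code, not a procedure taken from the papers discussed here; the 200 ms floor and the 3-MAD cutoff are arbitrary conventions chosen for the example.

```python
import numpy as np

def flag_rt_outliers(rt_seconds, floor=0.2, n_mads=3.0):
    """Return a boolean mask marking suspect response times.

    A response is flagged if it is faster than `floor` seconds (likely a
    random click) or if its log-RT lies more than `n_mads` scaled median
    absolute deviations above the median log-RT (likely inattention).
    """
    rt = np.asarray(rt_seconds, dtype=float)
    log_rt = np.log(np.clip(rt, 1e-6, None))
    med = np.median(log_rt)
    mad = np.median(np.abs(log_rt - med))
    too_fast = rt < floor
    too_slow = log_rt > med + n_mads * 1.4826 * mad
    return too_fast | too_slow

# Example: screen a small set of response times before comparing treatments
rts = [0.05, 0.8, 1.1, 0.9, 1.3, 45.0, 1.0]
print(flag_rt_outliers(rts))  # flags the 0.05 s and 45.0 s responses
```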
Our manuscript is meant to assess the state of the art, to stimulate the discussion on RT analysis, and to bring about a critical re-evaluation of the relevance of the temporal dimension in decision making. In complex strategic decision making, adaptive behavior that makes efficient use of less information, less complex calculations (e.g., such as higher-order belief formation about an opponent's play), and emergent social norms, seems even more important than for individual DM . Inspired by the results in cognitive psychology, we envision a research agenda for strategic DM that parallels that of individual DM. Whilst we emphasize the potential contribution of RT to strategic DM, we note that most of our arguments are relevant to individual DM.
We envision this manuscript as a critical review of RT analysis that is accessible to readers with little prior knowledge of the topic, for instance an advanced graduate student who wants to jump-start her/his understanding of the issue. Since the paper is quite long, we have used a modular structure so that readers with prior experience may selectively choose the sub-topics they are more interested in. An extended version of the paper including some more technical arguments can be found in our working paper (Spiliopoulos and Ortmann 2016), which we first posted in 2013 and have revised contemporaneously.
The present manuscript is organized as follows. We summarize the benefits, challenges, and desiderata (the BCD) of both the experimental implementation of RT experiments and the joint statistical modeling of choice and RT data in Table 1. A literature review of RT studies and summary of the most important findings follows in Sect. 2. In the following section we delve into the multitude of ways to model RT and choices (Sect. 3). We then devote the next three sections to pull together the benefits, challenges, and desiderata of RT analysis in experimental economics. We encourage the reader to preview our summary arguments in Table 1-keeping these arguments in mind before delving into detailed arguments will likely be beneficial. Section 7 concludes our manuscript. A detailed literature review of RT in strategic DM is presented in ''Appendix 1'', including Table 3 taxonomizing all the studies we have found. A framework for relating behavior and decision processes to time constraints for strategic DM is presented in ''Appendix 2''.
A review of the RT literature
There are three waves of RT studies that can be classified according to the types of tasks investigated. The first wave was concerned with judgment tasks, for example involving perceptual acuity or memory retrieval. A second wave emerged first in cognitive psychology and later in economics examining individual DM choice tasks that required valuation processing rather than judgments, e.g., decision making under risk and lottery choices (Dror et al. 1999;Kocher et al. 2013;Wilcox 1993), and multi-alternative and -attribute choice (Rieskamp and Hoffrage 2008;Rieskamp and Otto 2006). The third, and most recent, wave involves the analysis of RT in strategic DM or games-below we focus on this third wave.
We catalogued the existing literature on RT in strategic games by performing multiple searches in Google Scholar (April, 2013) 5 and by sifting through the results of these searches to obtain an initial list of relevant studies. We then identified other studies on RT that were cited in these papers to obtain as complete a list as possible. We have repeated these searches for each revised draft since the original. Unpublished working papers are included because RT studies in strategic DM emerged fairly recently.
A summary of the main characteristics of the literature using RT in strategic DM can be found in our working paper (Spiliopoulos and Ortmann 2016). A more detailed discussion of individual studies is presented in Table 3 in ''Appendix 1''. Out of a total of 52 studies (41 published and 11 unpublished) roughly half of the studies (52%) in our data set do not impose any time constraints and simply measure the (endogenous) RT of decisions. Dual-system models of behavior are the most common (48%), followed by models involving iterated strategic reasoning (15%), and models based on the cost and benefits of cognitive effort (12%) and the effect of emotions (13%). We proceed below by discussing the key findings of the literature for the following themes broached in the introduction: (a) preferences (risk, intertemporal and social), (b) decision processes and emotions, (c) type classification, and (d) the speed-performance profile. Table 2 summarizes the key findings in each of these topics.
Table 2 Summary of the key findings by topic

Risk: Evidence that time pressure increases risk-taking behavior in the domain of gains, but decreases risk-taking behavior in the domain of losses. Framing has been found to mediate these effects, with aspiration levels playing a role.

Intertemporal: Limited evidence that the present bias is reduced under time pressure, but the long-term discounting factor and utility function curvature remain the same.

Social: No consensus on whether cooperation or pro-social behavior are more intuitive. The debate now centers on methodological critiques based on important mediators and/or confounding variables. An alternative hypothesis with some empirical support is that reciprocity is more intuitive. Another hypothesis is that the higher the cognitive dissonance or conflict, the slower the RT-this is consistent with a sequential-sampling account. This implies an inverted-U shaped relationship between RT and cooperation, which could reconcile the conflicting findings in the literature.

Processes: The closer the valuations of competing options are, the longer the (endogenous) time taken to decide. Limited evidence that the existence of aspiration levels that easily discriminate between options leads to a shorter endogenous RT. Decisions consistent with focal outcomes are associated with shorter RT.

Heuristics: Heuristics are more likely to be used under time pressure-in many cases they involve ignoring some of the available information, particularly in strategic DM.

Emotions: Limited evidence that time delays reduce negative emotions about unfair offers, leading to greater acceptance rates in ultimatum games.

Classification: RT is predictive of behavior (out-of-sample) in a variety of tasks. In many cases, RT is more informative than other variables such as risk preferences or the normative equilibrium solution.

Speed-performance profile: Moderate evidence that, on average, decision quality and payoff performance for individual DM is reduced under time pressure and that there exists a positive relationship between endogenous RT and performance. However, this finding is not robust for strategic DM as it depends crucially on the characteristics of a game. Preliminary findings that time-based incentives do not affect decision quality.

Risk preferences
With the exception of an early study using mixed prospects (Ben Zur and Breznitz 1981), the majority of studies find that time pressure tends to increase risk-seeking behavior in the gain domain. Modeling choices between binary lotteries in a Prospect Theory framework, Young et al. (2012) find evidence of increased risk-seeking for gains under time pressure.
Similarly, Saqib and Chan (2015) show that time pressure can lead to a reversal of the typical CPT preferences, so that decision makers are risk-seeking over gains and risk-averse over losses. Dror et al. (1999) find that time pressure in a modified blackjack task induced a polarization effect: participants were more conservative for low levels of risk but were more risk-seeking for high levels of risk. Financial decision making, particularly trading, is often performed on a much faster time scale of the order of a few seconds. Nursimulu and Bossaerts (2014) discover that under time pressure, subjects were risk-averse for a 1 s (one second) time constraint but risk-neutral for 3 and 5 s constraints, and positive skewness-averse for a 1 s constraint with their aversion increasing in the 3 and 5 s constraints. Kocher et al. (2013) tell a more cautionary tale about the robustness of time pressure effects on risk attitudes. They conclude that (a) risk seeking in the loss domain changes to risk aversion under time pressure, (b) risk aversion is consistently found with, and without, time pressure in the gain domain, and (c) for mixed lotteries, conditional on the framing of the prospects, subjects can become more loss-averse but also exhibit gain-seeking behavior. These studies involved decisions from description rather than experience. 6 Madan et al. (2015) confirm that time pressure in decisions from experience also leads to an increase in risk-seeking behavior. The evidence that time constraints moderate risk preferences is important for real-world decision making. Many high-stakes financial and medical decisions are made under time pressure-if decision makers exhibit more risk-seeking at the time of decision, then this could leave them open to the possibility of larger losses than their (non time-constrained) preferences would dictate after the decision is made.
Social preferences
Tasks involving social preferences dominate the strategic DM literature-ultimatum, public goods and dictator games comprise approximately one-quarter, one-fifth, and one-tenth of the studies respectively. We taxonomize the literature according to numerous hypotheses regarding the relationship between RT and behavior. The costly-information-search hypothesis (our own term) claims that response time is positively correlated with pro-social behavior because it requires attending to more information (the payoffs of the other player) and thinking about how to trade-off the various payoff allocations among players. In this tradition, Fischbacher et al. (2013) study mini-ultimatum games and find evidence that RT is increasing in the social properties subjects lexicographically search for, such as fairness, kindness and reciprocity. On the other hand, the social-heuristics hypothesis-sometimes more broadly construed as the fairness-is-intuitive hypothesis-contends that pro-social behavior is an intuitive or instinctive response in humans, suggesting a negative relationship between RT and pro-social behavior.
The social-heuristics hypothesis is the most tested in the literature as it is compatible with popular dual-system explanations of behavior, which use RT to infer what types of responses are instinctive or deliberative. Rand et al. (2012) find that cooperation is more intuitive than self-interested behavior, as they find a negative relationship between cooperation and both endogenous RT and time pressure. Supporting the hypothesis that cooperative behavior is instinctive, Lotito et al. (2013) conclude that contributions and RT are negatively related in public good games. Furthermore, focusing on responder behavior in the ultimatum game, Halali et al. (2011) find that subjects reject an unfair offer more quickly than they accept it. In dictator games, Cappelen et al. (2016) also conclude that fair behavior is faster.
However, other studies contest this hypothesis on various grounds, primarily methodological. Tinghög et al. (2013) disagree with Rand et al. (2012) on the basis that including some RT outliers in the data, leads to the conclusion that there is no clear relationship between RT and the degree of cooperation. In a public goods game, Verkoeijen and Bouwmeester (2014) manipulate knowledge about other players' contributions, the identity of an opponent (human or computer) under both time pressure and time delay; they do not find a consistent effect of time constraints on the degree of cooperation. In ultimatum games under time pressure, Cappelletti et al. (2011) find that proposers make higher offers whereas Sutter et al. (2003) find that responders are more likely to reject offers. In dictator games, Piovesan and Wengström (2009) conclude that RT is shorter for selfish choices both within-and between-subjects.
One of the most popular alternative hypotheses suggests that RT is increasing in the degree of cognitive dissonance or conflict that a decision maker is facing. Matthey and Regner (2011) induced cognitive dissonance in subjects by allowing them to decide whether they wish to learn about their opponents' payoff function. They discovered that the majority of otherwise ''pro-social'' subjects prefer not to view their opponents' payoff (when possible) using their ignorance as an excuse to act selfishly without harming their self-image. Choosing to ignore information, however, by inducing cognitive dissonance led to shorter RTs. In line with Dana et al. (2007), they conclude that many subjects are mainly interested in managing their self-image or others' perception rather than being pro-social. Jiang (2013) finds that honest choices in cheating games were associated with longer RT, suggesting again that people experience cognitive dissonance or must exert self-control to choose a non-selfish action. Evans et al. (2015) argue that the disparate findings concerning the relationship between cooperation and RT can be reconciled under the assumption that greater decision conflict is associated with longer RTs. Consequently, non-conflicted (extreme) decisions, such as purely selfish or purely cooperative behavior, will typically be faster than conflicted decisions attempting to reconcile both types of incentives. This leads to an inverse-U shaped relationship between RT and cooperation rather than the linear relationship typically postulated in the literature. In a meta-analysis of repeated games, Nishi et al. (2016) conclude that RT is driven not by the distinction between cooperation and self-interest, but instead by the distinction between reciprocal and non-reciprocal behavior. In social environments that are cooperative, cooperation is faster than defection, but in noncooperative environments the reverse holds. The authors put forth an explanation based on cognitive conflict, i.e., non-reciprocal behavior induces cognitive conflict in the decision-maker. Finally, Dyrkacz and Krawczyk (2015) argue that subjects in dictator and other games are more averse to inequality under time pressure.
Another explanation focuses on the possibility that imposing time pressure has unwanted side-effects, in particular it might create confusion about the game leading to more errors. Inference about social preferences can be problematic if these errors are systematically correlated with RT. In a repeated public-goods game, Recalde et al. (2015) find that the shorter RT is, the more likely errors are. Ignoring this relationship would lead researchers to conclude that subjects with shorter RTs had stronger pro-social preferences. Goeschl et al. (2016) also confirm that some subjects are confused in public goods games, and find a heterogeneous effect of time pressure on players. Subjects who were clearly not confused about the game became more selfish under time pressure.
On an important methodological note, there may exist other mediators of RT-that likely differ across studies-which must be rigorously accounted for before inference can be made. Krajbich et al. (2015) critique the use of RT to infer whether strategies are instinctive or deliberative without explicitly accounting for task difficulty and the heterogeneity in subject types, i.e., what is intuitive for each individual may depend on their type. Along these lines, Merkel and Lohse (2016) do not find evidence for the ''fairness is intuitive'' hypothesis after controlling for the subjective difficulty of making a choice. Similarly Myrseth and Wollbrant (2016), in a commentary on Cappelen et al. (2016), also draw attention to the importance of other similar mediators, making reverse-inference problematic, i.e., inferring that faster decisions are more intuitive. They make an important argument regarding the validity of drawing conclusions from absolute versus relative response times. Faster response times in various treatments may still be slow enough to reasonably lie in the domain of deliberate decision processes.
In light of the above studies and the methodological critiques that have been voiced, we believe that firm conclusions should not be drawn yet regarding the relationship between social preferences and RTs. While individual studies often test one or two of these competing hypotheses, nothing precludes the relevance of many hypotheses, especially when possible mediators are concerned. For example, assume that pro-social behavior is the more intuitive response. However, if making the pro-social decision involves significant information search costs (about the opponent's payoffs), then it is possible for the total RT to still be longer for pro-social behavior-this depends on the proportion of total RT that is spent on information search. Consequently, accounting for different sub-processes of decision making and the time required to execute these sub-processes could be important (a more extensive discussion of this can be found in ''Appendix 2''). Future studies should aim at controlling rigorously for the possible mediators that have been brought up and competitively testing the various hypotheses within the same framework.
Intertemporal preferences
Lindner and Rose (2016) conclude that while long-run discounting and utility function curvature are quite stable, present-biased preferences are significantly reduced under time pressure. They attribute this finding to a change in the attention of subjects, who were found to focus relatively more on the magnitude, rather than the timing, of payoffs. This is a striking result, as a dual-system account would predict that under time pressure, System I will be activated, leading to more impulsive choices, i.e., an increase in present bias. Again, we note that changes in attention and information search must be examined before reaching conclusions. The lack of studies examining intertemporal preferences and time is notable-further work is necessary to draw robust conclusions.
Decision processes and RT
Sequential-sampling models of decision making (also referred to as information-accumulation, or drift-diffusion, models) have become one of the main paradigms in the mathematical/cognitive psychology literature (Busemeyer 2002; Ratcliff and Smith 2004; Smith 2000; Smith and Ratcliff 2004; Usher and McClelland 2001); see also the extensive discussion in Sect. 3.6. These models assume that cognitive systems are inherently noisy and that the process of arriving at a decision occurs through the accumulation (integration) of noisy samples of evidence until a decision threshold is reached. An important prediction of these models is that the smaller the difference in the values of the options, the longer the RT. Krajbich et al. (2012), see also similar work in Krajbich et al. (2010) and Krajbich and Rangel (2011), extend standard sequential-sampling models to explicitly incorporate the allocation of attention and show that their model can simultaneously account for the triptych of information lookups, choice and RT. Importantly, their model predicts that the time spent on information lookups can influence choice, and that time pressure can lead to noisier valuations, thereby increasing the probability of an error.
Similar conclusions have been reached in the economics literature, albeit derived from different models. Wilcox (1993) finds that subjects exhibit longer RT (a proxy for effort) in a lottery choice task when monetary incentives are higher and the task is complex. Gabaix and Laibson (2005) and Gabaix et al. (2006) also derived the above-mentioned relationship between RT and the difference between option valuations under the assumption that valuations are noisy but improve the more time is devoted to the task; more details on their modeling can be found in Sect. 3.5. Chabris et al. (2009) tested the optimal allocation of time in decision tasks and reported empirical evidence that the closer the expected utility of the competing options is, the longer the response time. Similarly, Chabris et al. (2008) find that the same principle can be used to recover discount rates from RT data without observing choices.
Another important theme in the literature is the explicit consideration of heuristics (including the use of focal points) versus compensatory, and more complex, decision rules. Guida and Devetag (2013) combine eye-tracking and RT analysis in normal-form games, and find that RT was shorter for games with a clear focal point, and longer for Nash equilibrium choices. Fischbacher et al. (2013) find that participants' behavior, although heterogeneous, is consistent with the sequential application of three motives in lexicographic fashion. The more motives that are considered, the longer the RT, e.g., a selfish type only examines own payoffs, whereas a pro-social type must also examine others' payoffs. Coricelli et al. (2016), on the other hand, argue that choices between lotteries, whenever possible, may be driven by a simplifying heuristic based on aspiration levels. Such an aspiration-based heuristic can be executed more quickly than the compensatory processes that subjects revert to when this heuristic is not applicable. Related work finds that subjects under time pressure shift to simpler, yet still effective, heuristics, namely the Level-1 heuristic that simply best responds to the belief that an opponent randomizes with equal probability over his/her action space. Spiliopoulos (2016) examines repeated constant-sum games and finds that RT depends on the interaction of two different decision rules: the win-stay/lose-shift heuristic and a more complex pattern-detecting reinforcement learning model. While the former is executed faster than the latter, response time was longer when the two decision rules gave conflicting recommendations regarding which action to choose in the next round.
Research on the impact of emotions is less common. Grimm and Mengel (2011) delay participants' decisions on whether to accept or reject an offer in an ultimatum game by ten minutes. In line with their hypothesis that negative emotions are attenuated as time passes, they find higher acceptance rates after the time delay. Although regret and disappointment have been found to play a role in choices under risk (e.g., Bault et al. 2016; Coricelli et al. 2005; Coricelli and Rustichini 2009), their relationship with RT has not been thoroughly investigated.
Classification
RT is also used to classify subjects into different types, above and beyond possible classifications according to choice behavior. For example, Rubinstein (2007, 2013) shows that a typology based on RT is more predictive than a typology based on the estimated level of risk aversion. Rubinstein (2016) objectively defines contemplative (instinctive) actions in ten different games as those actions with longer (shorter) RT than the average RT in the game for all actions. The contemplative index of a player derived from subsets of nine of the ten games was positively correlated with the probability of the same player choosing a contemplative action in the tenth game. Devetag et al. (2016) find that the time spent looking up each payoff in 3 × 3 normal-form games is predictive of final choices and of the level of strategic reasoning of players. Schotter and Trevino (2014b) use RT in global games to distinguish between two types of players with respect to their learning process: intuitionists, who have a eureka moment when they realize which strategy is effective, and learners, who acquire an effective strategy through a slower trial-and-error (or inductive) process. A striking result is that RT was more predictive of out-of-sample behavior than the equilibrium solution.
These findings show that RT can be used either alone or in conjunction with choice data to sharpen the classification of subjects into types, thereby increasing our ability to predict the behavior of decision makers across different tasks. This suggests that models including both choice and RT predictions have greater scope and are more generalizable to new situations (Busemeyer and Wang 2000), which increases the predictive content of behavioral models.
Speed-performance profile
Another theme in the literature relates time pressure and the opportunity cost of RT to the quality of decision making, i.e., the speed-performance relationship (discussed at length in Sect. 4.2). Kocher and Sutter (2006) found that time pressure reduced the quality of individual DM, but time-based incentives led to faster decisions without a decrease in decision quality. Arad and Rubinstein (2012) discover that higher average payoffs are achieved by subjects with longer (endogenous) RT. We believe that this theme, which is closely related to the adaptive decision maker hypothesis, is the least studied so far in strategic DM. The allocation of time between a set of tasks has been studied by Chabris et al. (2009): subjects allocated more time to those tasks that were more difficult, defined as tasks where alternative options had more similar valuations. Recall also the finding mentioned above that roughly one-third of subjects adapt strategically to time pressure without sacrificing performance (here, payoffs) despite switching to less sophisticated heuristics. There is much work to be done in understanding the speed-performance relationship in strategic DM, and in examining whether it is robust to context and tasks. We conjecture it is not; therefore, further work will be required to map out how and why this relationship changes. We return to this in more detail in Sect. 4.2.
Summary
Our review of the existing literature revealed significant evidence that RT matters in decision making. Decision makers typically adapt to time constraints, leading to significantly different behavior. Consequently, the generalizability of empirical findings from the lab and the scope of existing models of behavior may need to be revised. Future work should be directed toward rigorously testing the robustness of some of the main findings in experimental economics and enriching our models with procedural components that can predict how decision makers adapt to the temporal aspect of decision environments; the following section is devoted to the latter.
Methodology-modeling
Studies of RT fall into two main categories based on how they utilize RT data, i.e., the type of model they employ. The non-procedural (descriptive) approach simply uses RT data as a diagnostic tool, thereby not requiring the specification of a model of RT per se. Consequently, the informativeness of such an approach is restricted to comparing RT across treatments. This approach can still inform us about the appropriateness of a model and the existence (or not) of significant heterogeneity in subjects' behavior, and it ultimately adds another criterion upon which to base the classification of subjects into types. A prime example is the dual-system approach, where RT is used to classify actions/behavior as instinctive or deliberative. As of now, the majority of strategic DM studies in the literature have adopted this type of analysis. Procedural models are more falsifiable, though: in addition to choice predictions they also make RT predictions, thereby sharpening model selection and comparison; see Sect. 4.4 for more details. The reader ought to relate the following discussion back to Table 4 to fully understand which processes and types of adaptation these competing models can capture.
3.1 Dual-system models
Dual-system (or dual-process) theories, based on the assumption that the human brain is figuratively comprised of two different systems, are increasingly being applied to decision making (Kahneman 2011). For an overview of the implications of dual-system models for economic behavior, see also Alós-Ferrer and Strack (2014) and other articles in the special issue on dual-system models in the Journal of Economic Psychology of which it was a part. System 1, the intuitive system, is conceptualized as being influenced by emotions, instinct and/or simple cognitive computations occurring below the level of consciousness. Decisions are made quickly and do not require vast reams of information. This system is viewed as part of the earlier evolution of the human brain and tends to be associated with ''older'' areas of the brain, e.g., the fight-or-flight system. System 2, the deliberative system, is conceptualized to operate on the conscious level and involves higher-level cognitive processes. Decisions are made more slowly and can involve conscious information search. This system is viewed as a more recent evolution of the human brain, and its usefulness involves the ability to override the instinctual responses of System 1 when necessary, or to plan a cognitive response in a new environment. Although there is evidence of some degree of localization of these systems, the double-dissociation studies often presented as evidence of two literally distinct systems at the neural level are not without controversy; see Keren and Schul (2009), Rustichini (2008) and Ortmann (2008) for critiques and comparisons of unitary versus dual-system models.
We consider standard dual-system models to be primarily descriptive models of behavior rather than procedural models. We base this assessment on how dual-system models are applied rather than their potential. Typically they are used to classify behavior as instinctive or deliberative. The inherent freedom in classifying behaviors as instinctive or deliberative is an important issue with the dual-system approach, particularly for strategic DM. Rubinstein (2007) uses the following possible classifications for an instinctive response, depending on the strategic structure of the game.
1. The number of iterations required to reach the subgame perfect NE.
2. The strategy associated with the highest own payoff.
3. The number of steps of iterated dominance required to solve a game.
4. The strategy selected by self-interested individuals.
There are other criteria that could define an instinctive response. In one-shot games, Guida and Devetag (2013) find that RT is shorter for games with a focal point compared to those without. In sum, definitions of instinctive responses can be very task- and context-dependent. The contradictory findings for games where social preferences are dominant provide striking evidence of this. Some studies conclude that RT is lower for self-interested choices (Brañas-Garza et al. 2016; Fischbacher et al. 2013; Matthey and Regner 2011; Piovesan and Wengström 2009), whereas other studies find that the equitable or ''fair'' split is associated with a lower RT (Cappelletti et al. 2011; Halali et al. 2011; Lotito et al. 2013). Under the auxiliary assumption that instinctive choices require less time, these studies arrive at opposing conclusions about what behavior has evolved to be instinctive. Furthermore, as already briefly indicated, the use of reverse inference, i.e., observing which choices are faster and declaring them to be intuitive, has been contested (Krajbich et al. 2015). The basic idea of these critical authors makes use of people's well-documented heterogeneity, for example in social preferences: they propose, essentially, that one's basic disposition (being selfish or being altruistic, for example) determines what one considers intuitive. An alternative to the instinctive versus deliberative dichotomy relates the computational complexity of different (procedural) decision rules to endogenous or exogenous RT.
Extending the currently primarily descriptive models to include procedural submodels for each system, and an explicit mechanism for how the two systems interact, would transform them into procedural models. Since System 2 can override System 1, a complete theory would require a specification of how, and when, this occurs. Empirical findings suggest that System 2 is less likely to control the decision if there is time pressure, cognitive load, scarcity of information, etc. (Kahneman 2003). However, the multitude of switching mechanisms currently proposed combined with the dual systems, which individually can account for different behavior, leads to the possibility of ad-hoc explanations of empirical findings.
A new generation of dual-system type models addresses these concerns by explicitly modeling the interaction of the systems. Models of dual selves do so by explicitly defining the role of each self and imposing structure on their strategic interactions (Fudenberg and Levine 2006, 2012; Fudenberg et al. 2014). The long-run self cares about future payoffs, whereas the short-run self cares only about short-run, typically immediate, payoffs. The short-run self is in control of the final decision made. The long-run self seeks to influence the utility function of the short-run self, but incurs a self-control cost. Such an explicit representation of the dual selves and their interaction permits sharper predictions of behavior than standard dual-system models. While these models do not explicitly account for time, it is possible to operationalize RT with the auxiliary assumption that it is increasing in the cost function of self-control. Achtziger and Alós-Ferrer (2014) and Alós-Ferrer (2016) propose a dual-process model in which the interaction between a faster, automatic process and a slower, controlled process is explicitly defined. The model's RT predictions, for both erroneous and correct decisions conditional on the degree of conflict or agreement of the two processes, were empirically verified in a belief-updating experiment. Spiliopoulos (2016) similarly validates the model's qualitative RT predictions in repeated games, where the automatic process is specified as the win-stay/lose-shift heuristic and the controlled process as the pattern-detecting reinforcement learning model introduced in Spiliopoulos (2013). Conflict between the two processes led to longer RT, and also influenced RTs conditional on the interaction between conflict and which process the chosen action was consistent with.
Heuristics and the adaptive toolbox
Heuristics, often referred to as fast and frugal, in the tradition of the ecological-rationality program (Gigerenzer et al. 1999; Ortmann and Spiliopoulos 2017), are simple decision rules that often perform as well as, if not better than, more complex decision rules for out-of-sample predictions, i.e., cross-validation. Heuristics are particularly amenable to response time analyses because their sub-processes and interactions are typically explicitly specified in the definition of the heuristic. Consequently, RT can be defined as an increasing function of the number of elementary information processing (EIP) units required to execute a decision rule (Payne et al. 1992, 1993). EIPs can be thought of as the lowest-level operations required for the execution of a computational algorithm. These include retrieving units of information and processing them, e.g., mathematical operations such as addition, multiplication, subtraction, and magnitude comparisons. While this approach was originally applied to individual DM, the EIPs of popular decision strategies for normal-form games have also been calculated; under time pressure, players are found to shift to strategies that are less complex, i.e., comprised of fewer EIPs (a minimal sketch of such an EIP count follows below). Another class of heuristics that has been applied to strategic DM is decision trees, which structure the decision process as a series of sequential operations conditional on the history of prior operations, eventually leading to a terminal node that determines the final decision. Empirical investigations of decision trees in the ultimatum game can be found in Fischbacher et al. (2013) and Hertwig et al. (2013). The Adaptive-Toolbox paradigm (Gigerenzer and Selten 2002) posits that decision makers choose from a set of heuristics, and that a heuristic's performance depends on the exploitable characteristics of the current environment. A decision maker is therefore faced with the task of matching the appropriate heuristic to environmental characteristics. Obviously, any such choice will be affected by RT. How, in particular, are heuristics or strategies chosen if the decision maker has no prior knowledge of the relationship between heuristics' performance and environmental characteristics? For individual DM tasks, Rieskamp and Otto (2006) find evidence that subjects use a reinforcement-learning scheme over the available heuristics. For strategic DM, Stahl (1996) concludes that subjects often apply rule-learning, which is essentially a form of reinforcement learning over a set of decision strategies. Closely related to this approach is the literature on evolution as the selection mechanism of decision rules, e.g., Engle-Warnick and Slonim (2004); Friedman (1991).
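To make the EIP-based mapping from decision rules to RT concrete, the following minimal Python sketch (our own illustration, not code from the cited papers; the payoff matrix, the two stylized rules, and the linear RT mapping are all assumptions) counts elementary operations for a selfish own-payoff rule and for a Level-1 rule in a small normal-form game.

import numpy as np

def eips_own_payoff_max(own):
    """Selfish rule: scan own payoffs and keep a running maximum.
    EIP count (assumed): one read plus one comparison per cell."""
    reads = own.size
    comparisons = own.size
    return reads + comparisons

def eips_level1(own):
    """Level-1 rule: best respond to a uniformly randomizing opponent.
    EIP count (assumed): reads, multiplications, within-row additions,
    and one comparison per row of expected payoffs."""
    n_rows, n_cols = own.shape
    reads = own.size
    mults = own.size
    adds = n_rows * (n_cols - 1)
    comparisons = n_rows
    return reads + mults + adds + comparisons

def rt_proxy(eips, base=0.3, per_eip=0.05):
    """Assumed linear mapping from EIP count to a predicted RT in seconds."""
    return base + per_eip * eips

own = np.array([[3, 1, 4], [2, 5, 0], [1, 2, 3]])  # hypothetical own payoffs
for name, rule in [("own-payoff max", eips_own_payoff_max), ("Level-1", eips_level1)]:
    e = rule(own)
    print(f"{name}: {e} EIPs -> predicted RT ~ {rt_proxy(e):.2f} s")

With these assumptions the more complex Level-1 rule requires more EIPs and hence a longer predicted RT, which is the comparative-static the EIP approach delivers.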
Models of iterated strategic reasoning
Models of bounded rationality incorporating finite iterated best responses, such as the iterated deletion of dominated strategies, cognitive hierarchy (Camerer et al. 2004) and Level-k models (Costa-Gomes et al. 2001; Stahl and Wilson 1995), make implicit predictions about RT. Although evidence in favor of these models has been based on choice data, there are falsifiable RT predictions that would provide further useful information. Cognitive hierarchy or Level-k models implicitly produce an ordinal ranking of RT over the degree of sophistication within a decision. For example, since Level-k agents must solve for the actions of all prior k − 1 level players to calculate a best response, RT is a monotone increasing function of the level, k.
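The monotonicity of RT in k can be illustrated with a small sketch (hypothetical symmetric 3 × 3 payoffs, an assumed Level-0 anchor on the first action, and a crude operation count as the RT proxy; none of this is taken from the cited models' implementations).

import numpy as np

def best_response(payoff, opp_action):
    """Index of the row maximizing the player's payoff against a given column."""
    return int(np.argmax(payoff[:, opp_action]))

def level_k_action_and_ops(k, own, opp, level0_action=0):
    """Iterate best responses up the hierarchy: Level-1 responds to Level-0, etc.
    Each step evaluates one payoff column, so the operation count grows with k."""
    action, ops = level0_action, 0
    for level in range(1, k + 1):
        payoff = own if level % 2 == k % 2 else opp  # alternate perspectives up the chain
        action = best_response(payoff, action)
        ops += payoff.shape[0]  # payoff look-ups/comparisons for this step
    return action, ops

own = np.array([[3, 0, 2], [1, 4, 1], [0, 2, 5]])  # hypothetical 3x3 game
opp = own.T                                        # stylized symmetric opponent
for k in range(1, 5):
    a, ops = level_k_action_and_ops(k, own, opp)
    print(f"Level-{k}: action {a}, ~{ops} elementary operations (RT proxy)")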
Substantive models augmented with auxiliary assumptions
The joint modeling of choice and RT is not necessarily restricted to explicitly designed procedural models, but can be accomplished by redefining models of substantive rationality. For example, the EU maximization problem can be modified in the following ways:

3.4.1 The addition of constraints that capture cognitive costs, bottlenecks, and limitations to the standard maximization problem

The addition of a constraint to an unconstrained optimization problem can have RT implications if the constraint can be explicitly linked to time. For example, Matejka and McKay (2015) connect the Rational-Inattention model (Sims 2003, 2005, 2006) to the multinomial-logit choice rule often used to map the expected utility of actions to a probability distribution over the action space. The precision, or error parameter, in the multinomial-logit model is linked to the cost of information acquisition. Agents optimally choose the level of information they will acquire before making a decision.
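As a rough illustration of this link, the sketch below maps action values into multinomial-logit choice probabilities, treating the precision parameter as inversely related to a unit information cost; the specific values and costs are hypothetical, and the sketch is only meant to convey the qualitative idea rather than the full rational-inattention solution.

import numpy as np

def logit_choice_probs(values, info_cost):
    """Multinomial-logit choice probabilities.
    A higher unit cost of information is interpreted here as lower precision,
    i.e., noisier choice (an assumption made for illustration)."""
    precision = 1.0 / info_cost
    z = precision * np.asarray(values, dtype=float)
    z -= z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

values = [1.0, 0.8, 0.2]        # hypothetical expected utilities of three actions
for cost in [0.05, 0.5, 5.0]:   # hypothetical unit costs of information
    print(cost, np.round(logit_choice_probs(values, cost), 3))
# As the information cost rises, choice probabilities approach uniform;
# as it falls, probability mass concentrates on the highest-value action.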
Modification of the objective function
An alternative approach incorporating RT is based on the premise that the appropriate objective function in the wild is to maximize expected utility per time unit. This assumption is often used in evolutionary biology, where survival depends on the energy expenditure and intake per time unit, e.g., Charnov (1976).
The addition of auxiliary assumptions related to RT
Similarly to the discussion in Sect. 3.2, it may be possible to add auxiliary RT assumptions to substantive models (rather than heuristics) based on the information and operations required by the model, e.g., the number of parameters in a decision maker's utility function. Recall from earlier discussions that in the context of social preferences this implies that a decision maker who is self-interested would exhibit lower RT than one who cares about an opponent's outcome, since the latter requires the additional lookup and processing of their opponent's payoffs.
Search and attentional models
Models in this class explicitly account for information search or acquisition either externally (directly from the environment) or internally (from memory). For example, the Directed-Cognition model of external search (Gabaix and Laibson 2005;Gabaix et al. 2006) extends the agent's objective function to include the opportunity cost of time, and is consistent with empirical evidence that subjects were partially myopic in their assessments of the future benefits and costs of additional information acquisition, thereby circumventing the intractability of a rational solution. Similarly, Bordalo et al. (2012) find that information salience is predictive of RT through the effect of salience on the allocation of attention.
Internal information acquisition from memory is also time-dependent, e.g., memories that are more likely to be needed (are more recent and/or have been rehearsed more times) are retrieved more quickly (Schooler and Anderson 1997). In individual DM, Marewski and Melhorn (2011) leverage the explicit modeling of memory using the ACT-R framework (Anderson 2007;Anderson and Lebiere 1998) to infer which models are appropriate. In strategic DM, forgetting is found to constrain the strategies used by players in repeated games (Stevens et al. 2011).
Sequential-sampling models
One of the main advantages of such models is the clear identification of the underlying process mechanism and the simultaneous modeling of both choices and RT. The instantaneous valuations of each available option are conceptualized as a deterministic drift component, which is a function of the expected payoff of the option, and a random component. Evidence for each option is accumulated over time, as determined by the drift rate and noise. The whole process resembles a random walk with a drift specified by the instantaneous valuations of each option. If there are no time constraints, then a decision is made when the accumulated evidence for any of the options reaches a threshold value. Intuitively, for a given threshold, a lower (higher) drift rate leads to a longer (shorter) mean RT. For a given drift rate, a higher threshold reduces the probability of erroneously choosing the option with the lower mean valuation as the effects of noise are diminished. Alternatively, if a time constraint is enforced, then rather than racing towards a threshold value, a decision is made in favor of the option that has the highest accumulated evidence at the time the constraint is reached.
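A minimal simulation sketch of such an accumulator for a binary choice is given below (all parameter values are arbitrary illustrations, not estimates from any cited study); it implements both stopping rules described above and reproduces the prediction that a smaller value difference yields longer mean RT and more errors.

import numpy as np

def simulate_ddm(value_diff, threshold=1.0, noise=1.0, dt=0.01,
                 deadline=None, n_trials=1000, seed=0):
    """Random-walk evidence accumulation for a binary choice.
    Positive accumulated evidence favors option A (the higher-valued option)."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        evidence, t = 0.0, 0.0
        while True:
            evidence += value_diff * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
            if abs(evidence) >= threshold:               # threshold stopping rule
                break
            if deadline is not None and t >= deadline:   # deadline stopping rule
                break
        rts.append(t)
        correct.append(evidence > 0)
    return np.mean(rts), np.mean(correct)

for vd in [1.5, 0.5, 0.1]:  # shrinking value difference between the options
    rt, acc = simulate_ddm(vd)
    print(f"value difference {vd:.1f}: mean RT {rt:.2f}, accuracy {acc:.2f}")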
Early work originated in the context of memory retrieval (Ratcliff 1978). Busemeyer and Townsend (1993) formalized this process for individual DM under risk (referred to as Decision Field Theory). Many variations and related models can be found in the psychology literature and more recently in economics (e.g., Busemeyer 2002; Caplin and Martin 2016; Clithero 2016; Fudenberg et al. 2015; Krajbich et al. 2010, 2012; Krajbich and Rangel 2011; Ratcliff and Smith 2004; Smith 2000; Smith and Ratcliff 2004; Usher and McClelland 2001; Webb 2016; Woodford 2014). Although strategic DM can also be modeled in this manner, more complex characterizations of the decision processes are necessary. Spiliopoulos (2013) examines belief formation and choice in repeated games, extending a sequential model to capture strategic processes by assuming that the instantaneous drift is driven by an expected value calculation based on payoffs and strategic beliefs; the latter are determined by the retrieval of historical patterns of play from memory.
The first sequential-sampling models proposed a unitary-system account of behavior that can produce a variety of different behaviors by conditioning the decision threshold on the task properties and environment. Consequently, they were viewed as competitors to the dual-system approach; see Newell and Lee (2011). However, interesting hybridizations of dual-system models and sequential-sampling models have been presented recently. Caplin and Martin (2016) propose a dual-process sequential-sampling model that first performs a cost-benefit analysis of whether accruing further information (beyond any priors) is expected to be beneficial, and then either makes an automatic decision based on the priors, if the expected costs exceed the benefits, or otherwise triggers a standard accumulation process. The discussion about the appropriateness of dual-system, sequential-sampling and hybrid models is ongoing and in our view deserves the attention it receives. The varying RT predictions of these competing models can be useful in model comparison and selection.
Summary
We have presented a multitude of different models, often arising from opposing schools of thought, e.g., simple heuristics versus optimization under constraints, single- versus dual-system models. The presented models also differ significantly in terms of whether they explicitly incorporate decision processes or address only the functional level; in the terms of Marr (1982), the former operate on the algorithmic level and the latter on the computational (or functional) level. We are partial to models operating at the algorithmic level, or what we refer to as procedural modeling, further discussed in Sect. 6.1. However, operating at a higher level of abstraction can also have benefits, including simplicity. We suspect that the type of model chosen for RT analysis will be highly dependent on a researcher's proclivity; however, we encourage model comparisons between these different types of models. Furthermore, it may be the case that different types of models operate under varying degrees of time constraint; in this case we argue for a better understanding of the scope of these models and of the conditions under which each one is triggered in human reasoning. In the Introduction, we expressed concerns regarding the external validity of standard experiments that do not account for time constraints and the opportunity cost of time by assuming virtually unlimited, costless information search and integration. We argue that external validity can be improved by increasing experimental control through RT experiments (discussed in Sect. 4.3), and that such experiments allow us to thoroughly investigate the speed-performance relationship (discussed in the following section), which is particularly relevant for decisions in the wild.
Mapping the relationship between RT and performance
An often investigated relationship is the speed-performance or speed-accuracy trade-off. The difference between accuracy and performance is subtle but important. The former is a measure in the choice space, whereas the latter is a measure in the consequence space, which is essentially captured by the payoffs derived from a choice. For example, measures of accuracy include the proportion of actions that were dominated and the proportion that were errors (when clearly defined); note that these measures do not capture the cost to the decision maker of said errors. However, if time is scarce or costly, fast errors may be optimal if they have a relatively small consequence for payoffs and permit the allocation of time (and therefore a reduced probability of error) to decisions with higher payoffs.
A key insight of the ecological-rationality program (Gigerenzer 1988; Gigerenzer et al. 1999, 2011; Gigerenzer and Selten 2002; Ortmann and Spiliopoulos 2017) is that, in contrast to claims by researchers in the original adaptive decision maker tradition (Payne et al. 1988, 1993), more speed is not necessarily bought at the cost of lower performance. We note that this surprising result is conditional on an important methodological innovation, cross-validation, which has only recently found appropriate appreciation in economics (e.g., Erev et al. 2017; Ericson et al. 2015); see also Ortmann and Spiliopoulos (2017) for other references and details.
Economists seem well advised to thoroughly map the speed-performance relationship across classes of strategic games, and to do so, possibly (at least for certain research questions), also by way of cross-validation. Obviously, for strategic DM it is necessary to define both the class of game and the strategies that opponents are using. Determining in which classes of games realized payoffs can be expected to be negatively or positively related to time pressure or RT is an open question that seems worth investigating.
There exists less work on the speed-performance relationship than on the speed-accuracy relationship in strategic DM, as researchers have focused on variables in the action space such as cooperation rates, error rates, degree of sophistication, or equilibrium selection. For example, Rubinstein (2013) finds a negative relationship between RT and (clearly defined) errors, but no relationship between RT and the consistency of behavior with EUT in individual DM tasks. However, an explicit discussion of whether RT is related to the actual performance of players is notably absent, albeit easily remedied. As discussed in ''Appendix 2.2'', although a positive relationship between RT and the level of sophistication in reasoning seems intuitive and supported by the available evidence, in some games decreasing sophistication may actually lead to higher payoffs for all players of a game; recall the game in Table 5. Similarly, the findings by Guida and Devetag (2013) suggest that focal points are increasingly chosen under time pressure; in games where these focal points may help players to coordinate, this may result in higher payoffs, but not necessarily so. We are aware of only three economics studies, already mentioned earlier, that explicitly relate performance to RT: Kocher and Sutter (2006) in individual DM, and Arad and Rubinstein (2012) and the time-pressure study mentioned above in strategic DM. More attention to the consequence space, rather than the action space, seems desirable.
If decision makers explicitly consider both performance and the RT necessary to achieve various levels of it, then an important unanswered question is how they choose the exact trade-off point (assuming a negative relationship exists between performance and RT). Do they strategically choose this point conditional on task characteristics such as task difficulty, other concurrent cognitive load, types of time constraint, etc.? We present an indicative selection of hypotheses under the assumption that speed and performance are negatively related (a sketch contrasting these criteria follows below):
(a) Unconstrained Expected Utility maximization: The effect of RT is completely ignored, and subjects simply aim to maximize their expected utility.
(b) Unconstrained Expected Rate of Utility maximization: The objective function that is maximized is the expected utility per time unit.
(c) Performance satisficing: An aspiration level of performance (utility) is set, and RT is adjusted to keep performance constant.
(d) Time-constraint satisficing: A time-pressure constraint is externally set and is exhausted, thereby determining the performance.
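The following sketch illustrates how these four criteria can pick different stopping times on the same speed-performance profile; the concave performance curve u(t), the aspiration level, and the external time limit are purely hypothetical.

import numpy as np

t = np.linspace(0.1, 10.0, 1000)      # candidate deliberation times (seconds)
u = 10 * t**2 / (t**2 + 4)            # hypothetical concave performance curve u(t)

# (a) Unconstrained expected-utility maximization: ignore time, take the best u(t).
t_eu = t[np.argmax(u)]
# (b) Expected rate of utility: maximize utility per unit of time, u(t) / t.
t_rate = t[np.argmax(u / t)]
# (c) Performance satisficing: stop at the first t reaching an aspiration level.
aspiration = 8.0
t_satisfice = t[np.argmax(u >= aspiration)]
# (d) Time-constraint satisficing: an external limit is simply exhausted.
t_constraint = 3.0

for label, ti in [("(a) EU max", t_eu), ("(b) rate max", t_rate),
                  ("(c) satisficing", t_satisfice), ("(d) time limit", t_constraint)]:
    print(f"{label}: stop at t = {ti:.2f} s")

Under these assumptions the four criteria imply four different deliberation times, which is exactly the kind of divergence that makes them empirically distinguishable.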
We present some evidence from individual DM tasks for consideration. If the decision maker has the opportunity to repeatedly engage in the same task, then there exists a closed-form solution for the decision threshold that optimizes the reward rate for choice sets with two options (Bogacz et al. 2006). Hawkins et al. (2012) present evidence that subjects engage in performance satisficing rather than maximization. Satisficing requires the specification of how high the performance criterion is set, and how this may depend on prior experiences. Balci et al. (2010) find that subjects facing a two-alternative forced-choice task exhibit a bias towards maximizing decision accuracy rather than the reward rate initially, i.e., adopted a suboptimal speed-accuracy trade-off. However, after repeated exposure to the task subjects' behavior moved significantly towards the maximization of the reward rate. Young subjects are more likely to seek a balance between accuracy and speed than older subjects; the former tend to maximize reward rates, especially with experience and extensive feedback, whereas the latter maximize accuracy, i.e., minimize errors (Starns and Ratcliff 2010).
Explicit experimental control of RT
At first sight, experimental studies without any explicit exogenous constraint on RT may appear immune from RT considerations. However, implicit time constraints may be inadvertently imposed by the experimenter or inferred by subjects. In consequence, studies that are otherwise similar may not be directly comparable if the implicit RT pressure varies across them. We conjecture that differences in implicit time pressure may drive some of the seemingly contradictory or non-replicable results in the literature if behavior is adaptive. Implicit time constraints may exist in many studies where RT is supposedly endogenous for the following reasons:
(a) Recruitment materials usually mention the expected length of the experiment, which is likely to cue subjects to the experimenter's expectation of the time it takes to complete the task.
(b) Experimental instructions often include information that may influence the amount of time a subject decides to allocate to tasks. Strategic interaction of subjects, for example, might imply a weak-link matching scheme where the slowest player determines the time the session takes.
(c) For practical reasons, such as random matching for the determination of payoffs, or to avoid disturbances from subjects exiting early, subjects might have to wait for all participants to finish before they are allowed to collect payment and leave. Similarly, subjects may be delayed whilst waiting for other subjects to enter their choices before moving on to the next round of a repeated game.
(d) Subjects may be affected by many subtle cues in the wording of instructions. Benson (1993) and Maule et al. (2000) are cautionary tales of the effects of instructions on perceived time pressure: behavior was significantly influenced by different (loaded) instructions describing the same objective time limit.
In conclusion, the loss of experimental control associated with implicit time constraints is a potential problem. Consequently, experiments with explicit exogenous time constraints may be significantly more comparable, and less noisy within a particular experiment, as they do not run the risk of participants subjectively inferring implicit time pressure. Alternatively, the adverse impact of implicit time constraints can be reduced without imposing explicit time constraints by permitting subjects to engage in an enjoyable activity, e.g., surfing the internet, if they have completed all their tasks early. We would also encourage accounting for implicit time constraints in meta-analyses of studies; to the best of our knowledge this has not been done before.
Improved model selection, identification and parameter estimation
Model selection and identification, as we have argued earlier, can be sharpened by the use of RT. Models differ in their explanation of how an adaptive decision maker will react to time constraints and, ultimately, how observed behavior will change. As mentioned, differential RT predictions are a valuable aid in comparing competing models of behavior (e.g., Bergert and Nosofsky 2007; Marewski and Melhorn 2011). Significant information can be gleaned from the relationship between RT and candidate variables of observed behavior, such as the error rate, realized choices, adherence to theoretical concepts such as transitivity, equilibrium concepts, etc. In short, models that make RT predictions in addition to choice predictions are more structured, rendering them more falsifiable, as either RT or choice data can refute them.
Classification of heterogeneous types
RT data can sharpen the classification of subject types, particularly in cases where two or more different decision strategies prescribe the same, or very similar, observed choices. The Allais-Paradox task in Rubinstein (2013) is a case in point: patterns of choices differed significantly between subjects with low and high RT. Another example involves distinguishing between two types of learning: (a) incremental learning, where RT is expected to decrease smoothly with experience, and (b) eureka or epiphany learning, where RT should fall abruptly when subjects have an important insight that has a lasting impact on play (Dufwenberg et al. 2010; McKinney and Huyck 2013; Schotter and Trevino 2014b).
RT as a proxy for other variables
RT may be used as a proxy for effort (e.g., Ofek et al. 2007; Wilcox 1993) to examine the effects of variations in important variables such as experimental financial incentives, labor productivity incentives, and other general incentive structures. For example, RT can be used as a proxy for effort in the debate regarding financial incentives in experiments. A positive relationship between RT and the magnitude of financial incentives, ceteris paribus, would support the viewpoint that incentives matter. Alternatively, RT may also be a proxy for the strength of preference for an option; see the empirical evidence (e.g., Chabris et al. 2008, 2009) in favor of a negative relationship between RT and the difference in the strength of preference among available options. Such a relationship is also predicted by the sequential-sampling models discussed in Sect. 3.6.
Identification
The use of RT, above and beyond choices only, is beneficial for identification purposes; however, it is not a panacea. Recall the extensive discussion in Sect. 2.1.2 about reverse inference and identification in games where social preferences are important. The interaction of players in strategic DM provides an additional layer of complexity in the identification of processes, e.g., beliefs may play an important role. Consider social-dilemma games where RT constraints are implemented to examine their causal effect on the degree of cooperation or pro-social choices. If it is common knowledge that all players face time pressure in a treatment, then players may change their beliefs about how cooperative their opponents will be. Consequently, changes in social preferences and beliefs would be confounded, rendering the attribution to either impossible. These issues can be alleviated by careful choice of experimental design and implementation details, and by the concurrent collection of other process measures such as information search and beliefs. For example, Merkel and Lohse (2016) explicitly collect players' beliefs about their opponents' likely behavior across different time treatments. Identification may also be hampered in cases where RT constraints have a differential effect on other treatments, i.e., when RT interacts with the other treatments. For example, consider a public goods experiment played under time pressure, where the treatments manipulate the number of players (few versus many). If increasing the number of players makes the game more complex or difficult, then a specific level of time pressure may have a greater relative impact in the treatment with more players. Such cases are easily remedied with an appropriate full factorial 2 × 2 design where RT (endogenous versus time pressure) and the number of players (few versus many) are both manipulated, as the main effects of each factor and their interaction can then be recovered.
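A minimal simulation sketch of such a 2 × 2 design is given below (the effect sizes and the data-generating process are hypothetical); it shows how a regression with an interaction term recovers the main effects of time pressure and group size together with their interaction.

import numpy as np

rng = np.random.default_rng(1)
n = 200  # subjects per cell

# Factor codes: pressure = 1 under time pressure, many = 1 for the large group.
pressure = np.repeat([0.0, 0.0, 1.0, 1.0], n)
many = np.tile(np.repeat([0.0, 1.0], n), 2)

# Hypothetical data-generating process: time pressure lowers contributions,
# and does so more strongly in the (more difficult) many-player treatment.
contrib = (10 - 1.5 * pressure - 0.5 * many - 2.0 * pressure * many
           + rng.normal(0, 2, size=pressure.size))

# OLS with an interaction term recovers intercept, main effects and interaction.
X = np.column_stack([np.ones_like(pressure), pressure, many, pressure * many])
beta, *_ = np.linalg.lstsq(X, contrib, rcond=None)
for name, b in zip(["intercept", "pressure", "many players", "interaction"], beta):
    print(f"{name:>13}: {b:+.2f}")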
Irregular RT distributions, outliers and non-responses
The question of whether extreme values are regarded as outliers, and if so, how they are handled in the data analysis, is of considerable importance and consequence; recall the debate between Rand et al. (2012) and Tinghög et al. (2013) reviewed earlier. Very short RTs may arise from fast guessing, and very long RTs from subjects who are not exerting much effort and are bored. Furthermore, the use of time pressure often leads to a number of non-responses if subjects do not answer on time. This leads to a selection problem if non-responses are correlated with subject characteristics. How these RT idiosyncrasies are treated is of paramount importance.
Consequently, endogenous RT distributions tend to be non-normal (left-truncated at zero), heavily skewed and often consist of extreme (low and high) values. This renders analyses using mean RT and ANOVA problematic. Whelan (2010) recommends the use of the median and inter-quartile ranges for such cases, but notes that since true population medians are strongly underestimated for small sample sizes, median RTs should not be used to compare conditions with different numbers of observations. Another common solution is to appropriately transform the RT distribution into an approximate normal distribution, usually through the use of a log-transform. Outliers can have a significant impact on parametric summary statistics; possible solutions include using (a) robust non-parametric statistics, (b) Student t-distributions that allow for fat-tailed distributions (e.g., Spiliopoulos 2016), and (c) hierarchical modeling (see Sect. 6.3). We refer the reader to Van Zandt (2002) and Whelan (2010) for an extensive discussion of RT distribution modeling.
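The sketch below illustrates these points on simulated, heavily skewed RT data with a few extreme values (all parameters are hypothetical): medians are far less affected by outliers than means, and the two conditions can be compared with a rank-based test or with a t-test on log-transformed RTs.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical skewed RT data (seconds) for two conditions, plus a few outliers.
rt_a = np.concatenate([rng.lognormal(mean=1.0, sigma=0.6, size=150), [60.0, 90.0]])
rt_b = rng.lognormal(mean=1.2, sigma=0.6, size=150)

print("means  :", rt_a.mean().round(2), rt_b.mean().round(2))          # inflated by outliers
print("medians:", np.median(rt_a).round(2), np.median(rt_b).round(2))  # robust

# Rank-based (non-parametric) comparison of the two conditions.
u_stat, p_mwu = stats.mannwhitneyu(rt_a, rt_b, alternative="two-sided")
# Parametric comparison after a log-transform to approximate normality.
t_stat, p_logt = stats.ttest_ind(np.log(rt_a), np.log(rt_b))
print(f"Mann-Whitney p = {p_mwu:.3f}, t-test on log(RT) p = {p_logt:.3f}")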
Between-subject heterogeneity can be attributed to two sources. Parameter heterogeneity arises from subjects using the same type of model, i.e. identical functional forms, but with individual-specific parameters. Model heterogeneity arises from subjects using completely different models, e.g. heuristics that cannot be nested within a more general model.
It is imperative to model heterogeneity directly as pooling estimation affects parameter recovery and model selection (Cabrales and Garcia-Fontes 2000;Cohen et al. 2008;Erev and Haruvy 2005;Estes and Maddox 2005;Estes 1956;Wilcox 2006). Consequently, econometric models of RT should also allow for different RTs across subjects and heterogeneous effects of various RT determinants (e.g., Spiliopoulos 2016). Modeling both parameter and model heterogeneity requires the estimation of both finite mixtures and random-effects or hierarchical econometric models presented in Sect. 6.3. See our working paper (Spiliopoulos and Ortmann 2016) for an extended discussion.
RT measurement error
Experimentalists will typically delegate RT data collection automatically to whatever software package they use to set up the experiment, or in rarer cases may code their own experiment from scratch. While the accuracy of RT data collection has not been extensively examined in economics, more work has been done in psychology. We note that in psychology response times are often on the order of hundreds of milliseconds, compared to seconds in economics; therefore, the accuracy of data collection at such fine gradations is not as important in experimental economics. Variations in RT estimates can be caused by any combination of hardware, software, and network latencies (for online experiments). The importance of these variations depends on their magnitude relative to the absolute RTs in the experiment and on whether they are systematic or random, i.e., whether the noise can be expected to average out for a large enough number of observations. The general conclusion is that while absolute measures of RT may not be reliable across differences in these three sources of noise, relative measures of RT remain relatively faithful. Furthermore, the standard deviation of the induced noise is very low compared to the scale of RTs that experimental economics deals with. The most popular experimental economics software is z-Tree (Fischbacher 2007), and it includes the ability to internally measure response times. Perakakis et al. (2013) propose an alternative method based on photo-sensors that capture changes in the presentation of information on screen to counteract possible miscalculations arising from the computer's internal timing. Their photo-sensor system recorded response times that were on average 0.5 s lower than those recorded internally by z-Tree. While this difference may be problematic if the study attempts to link the timing of events with other biophysical markers such as heart rate, it did not adversely affect the conclusions drawn from RT analyses across treatments. In economics, we will typically be interested in relative RTs and changes across treatments rather than absolute RTs; therefore, z-Tree should be accurate enough for the vast majority of applications. Seithe et al. (2015) introduced a new software package (BoXS) specifically designed to capture process measures in strategic interactions, including RT. They present evidence that this software's RT accuracy is approximately ±50 ms (when presenting information for at least 100 ms), which is more than adequate for economic applications.
In conclusion, for online experiments we can expect significant variations arising from both hardware and network sources, i.e., RT measurements will be relatively noisy. However, online experiments usually have a large number of subjects, so that the noise often cancels out. For experiments in the laboratory, there does not seem to be a significant problem with the accuracy of RT measurements in the most common types of applications.
Procedural modeling
The existing RT literature is dominated by non-procedural (descriptive) rather than procedural modeling. We believe that in many instances procedural models are more useful than non-procedural models, as the former allow for comparative statics or quantitative predictions regarding the joint distribution of choice and RT. Such models can be falsified either by incorrect choice or RT predictions, thereby increasing the statistical power of experiments and associated hypothesis tests. Other process measure variables discussed in the next section could increase the power even further. Procedural models also provide a coherent framework within which to organize and define exactly how behavior adapts to time constraints; various mechanisms are discussed in ''Appendix 2''.
Concurrent collection of other process measures
Few existing studies in experimental economics collect other process measures beyond choice and RT; some notable exceptions can be found in Table 3. Examples of other decision or process variables include information search using Mouselab or eye-tracking techniques (Crawford 2008), response dynamics (Koop and Johnson 2011), provisional choice dynamics (Agranov et al. 2015), belief elicitation (Schotter and Trevino 2014a), communication between players (Burchardi and Penczynski 2014), verbal reports (Ericsson and Simon 1980), and physiological responses and neuroeconomic evidence (Camerer et al. 2005). However, it should be kept in mind that collecting these process measures is more disruptive than collecting RT, and therefore their collection could influence behavior. The reader is referred to Glöckner (2009) and Glöckner and Bröder (2011) for examples of procedural models predicting multiple measures: RT, information search, confidence judgments, and fixation duration. Other examples of the value added of process measures beyond choice data include Johnson et al. (2002), Costa-Gomes et al. (2001), and others.
Hierarchical latent-class modeling
Hierarchical latent-class models can be an effective ally in capturing heterogeneity and outliers in the data. Estimating models per individual, whilst capturing individual heterogeneity, may not be the best line of attack due to the large number of free parameters and susceptibility to overfitting. Instead, we propose hierarchical latent-class models that capture both types of between-subjects heterogeneity with a reduction in free parameters (Conte and Hey 2013; Lee and Webb 2005; Scheibehenne et al. 2013; Spiliopoulos 2012; Spiliopoulos and Hertwig 2015). The latent classes capture model heterogeneity, whereas the hierarchical structure models parameter heterogeneity. The latent-class approach yields both prior and posterior (after updating the prior with the observed data) probabilities of subjects belonging to the specified latent classes. An additional bonus of such an econometric specification is that outliers can automatically be identified as belonging to one of the classes. Furthermore, latent-class models can also be used for within-subject heterogeneity (Davis-Stober and Brown 2011; Shachat et al. 2015), e.g., the adaptive use of heuristics.
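To illustrate the latent-class component in isolation, here is a toy EM sketch of our own (it omits the hierarchical random-effects layer and any link to a specific economic model): subjects' mean log RTs are assumed to come from one of two classes, and the estimation returns class parameters, mixing weights, and posterior membership probabilities.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical subject-level mean log-RTs drawn from two latent classes.
data = np.concatenate([rng.normal(-0.5, 0.3, 40),   # "fast" class
                       rng.normal(0.8, 0.4, 20)])   # "slow" class

# Two-component Gaussian mixture estimated by EM.
w = np.array([0.5, 0.5])     # mixing weights (prior class probabilities)
mu = np.array([-1.0, 1.0])   # class means (starting values)
sd = np.array([0.5, 0.5])    # class standard deviations

for _ in range(200):
    # E-step: posterior probability of each class for each subject.
    dens = np.vstack([w[k] * stats.norm.pdf(data, mu[k], sd[k]) for k in range(2)])
    resp = dens / dens.sum(axis=0)
    # M-step: update weights, means and standard deviations.
    nk = resp.sum(axis=1)
    w = nk / nk.sum()
    mu = (resp * data).sum(axis=1) / nk
    sd = np.sqrt((resp * (data - mu[:, None]) ** 2).sum(axis=1) / nk)

print("class weights:", w.round(2))
print("class means  :", mu.round(2), "class sds:", sd.round(2))
print("posterior membership of first subject:", resp[:, 0].round(2))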
Cross-validation
Models of choice and RT ought to be subjected to the same strict demands that we impose on existing models. Specifically, procedural models should be competitively tested on out-of-sample data to ascertain their predictive ability or generalizability (Ahn et al. 2008; Busemeyer and Wang 2000; Yechiam and Busemeyer 2008), such as in the context of a large-scale model tournament (Ert et al. 2011; Spiliopoulos and Ortmann 2014). A commonly used technique is that of cross-validation, which requires that experimental data be partitioned into an estimation and a prediction dataset. Models are then fitted on the estimation dataset, and their performance is judged on the prediction dataset. This technique is effective in comparing models of varying flexibility as it avoids the problem of over-fitting by complex models. This is, in fact, the crux of one of the main tenets of the ecological-rationality program (Gigerenzer 1988; Gigerenzer et al. 1999, 2011; Gigerenzer and Selten 2002); for a critical review see Ortmann and Spiliopoulos (2017). Simple heuristics can outperform complex decision models on unseen data exactly because they have less ability to overfit to noise or uncertainty in the environment. Interestingly, cross-validation has not been extensively used in the RT literature, with the exception of a few notable studies. Chabris et al. (2009) and Schotter and Trevino (2014b) find that RT can be predictive of intertemporal choice and behavior in global games, respectively. Importantly, Rubinstein (2013) and Rubinstein (2016) find evidence that RT can be predictive of both individual DM and strategic DM choices across tasks. These results suggest that the degree of mutual information in choice and RT data is significant and can be exploited; we hope to see more analyses of this kind.

Table 3 (continued) Selected RT studies: study; RT treatment (en = endogenous, tp = time pressure, td = time delay); framework; conclusions; other measures and task
- Agranov et al. (2015); en; iterated/strategic reasoning; large proportion of naive types, who often switch choices haphazardly over time without exhibiting increasing sophistication, whereas strategic players switch choices less and their behavior increases in sophistication over time; 2/3 guessing game (p-beauty)
- Evans et al.; en, tp, td; dual-system, cognitive dissonance, sequential sampling; greater decision conflict is associated with slower RT, leading to an inverted-U shaped relationship between RT and cooperation; questionnaire; prisoner's dilemma, one-shot and repeated public goods games, trust game
- Tinghög et al. (2013); tp, td; dual-system; no effect of time pressure on cooperation rates; one-shot and repeated public goods and prisoner's dilemma games
- Arad and Rubinstein (2012); en; iterated/strategic reasoning; longer RT was associated with higher average payoffs; Colonel Blotto game
- Glazer and Rubinstein (2012); en; iterated/strategic reasoning; the more difficult it is to be persuasive, the longer the RT; persuasion game
- Rand et al. (2012); tp, td; dual-system; the level of contributions is inversely related to RT (whether endogenous or exogenous); one-shot public goods game
- Cappelletti et al. (2011); tp; dual-system; proposers under time pressure offer more, however there is no effect of cognitive load on offers; ultimatum game
- Piovesan and Wengström (2009); en; social preferences, emotions; lower RT is associated with selfish choices both between- and within-subjects; dictator game
- Turocy and Cason (2015); en; incentives; RT tends to be longer for higher signals in first-price auctions, and RT is independent of signals in second-price auctions; first- and second-price auctions
- Schotter and Trevino (2014b); en; search and attentional, eureka learning; RT can be used to predict out-of-sample behavior with more accuracy than the equilibrium prediction; global games
- Halali et al. (2011); en; dual-system, cognitive control; RT is shorter when rejecting an unfair offer than when accepting it, and ego-depletion led to an increase in the rate of rejection of unfair offers; ultimatum game
- Bosman et al. (2001); td; emotions; behavior and self-reported emotions are not affected by a time delay of 1 h; elicitation of expectations and emotions; ultimatum game
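A minimal sketch of the estimation/prediction split described above, using synthetic data and two candidate RT models of our own choosing (a linear and a needlessly flexible polynomial specification): both are fitted on the estimation set and compared by out-of-sample error, which typically penalizes the extra flexibility that only fits noise.

import numpy as np

rng = np.random.default_rng(3)
# Synthetic data: RT depends linearly on a difficulty measure, plus noise.
difficulty = rng.uniform(0, 1, 120)
rt = 1.0 + 2.0 * difficulty + rng.normal(0, 0.5, size=difficulty.size)

# Partition into an estimation (training) set and a prediction (holdout) set.
idx = rng.permutation(difficulty.size)
train, test = idx[:80], idx[80:]

def holdout_mse(degree):
    """Fit a polynomial RT model on the estimation set; return holdout MSE."""
    coefs = np.polyfit(difficulty[train], rt[train], degree)
    pred = np.polyval(coefs, difficulty[test])
    return np.mean((rt[test] - pred) ** 2)

for degree in [1, 6]:  # a simple model versus a needlessly flexible one
    print(f"degree {degree}: holdout MSE = {holdout_mse(degree):.3f}")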
Experimental implementation
We have argued that we expect significant between-subject heterogeneity in strategic DM. In conjunction with the inherent stochasticity of choice and the practical limitations on the number of observations from empirical studies, this will make inference challenging. In some cases, an effective solution in terms of experimental implementation is to design within-subject treatments, thereby eliminating the between-subjects source of variability. However, the decision to use a between- or within-subject design implies a trade-off (Charness et al. 2012); a within-subject design will limit the number of tasks that can be examined.
The RT literature using individual DM and strategic DM tasks has so far used rudimentary designs where behavior is observed as a function of different RT constraints, thereby revealing the RT-behavior profile. Existing experiments often use a t-treatments design, where t is the number of treatments with different time constraints and is usually equal to two or three. We will argue that we can further augment the usefulness of RT data by expanding experimental designs by, firstly, increasing the number and type of time treatments and, secondly, varying another variable, namely the difficulty of the task. What we propose is a t × d factorial design (d = number of treatments with varying difficulty) where the manipulation of the time constraint reveals the speed-performance profile and the manipulation of task difficulty reveals shifts of this profile. Task difficulty can be defined in various ways, which may differentially affect the speed-performance profile. Some examples of defining characteristics of difficulty for strategic DM are:
1. The size of players' action sets
2. The number of players in a game
3. The distance between a player's (subjective) expected payoffs per action
4. The uncertainty about opponents' types
5. The existence of imperfect information
6. The lack of focal points
7. The presence of cognitive load
Moving beyond the two-dimensional speed-performance trade-off to a three-dimensional speed-performance-difficulty profile will generate more conflicting predictions between models, thereby aiding model comparison and identification. Also, note that an experimental design that explicitly manipulates difficulty addresses one of the critiques put forth by Krajbich et al. (2015); namely, that differences in task difficulty and the degree to which behavior is instinctive may be confounded in existing studies that do not control for the former. We are aware that this line of attack might run into hostile budgetary defense lines.
Finally, researchers should consider when and how to disclose a time-constraint to subjects. If RT is exogenous, then specifics of the constraint can be announced in advance, or revealed to subjects during the decision process. For example, subjects may not be told how long they will have to make a decision but may be alerted in real-time when they must decide. Knowledge of the constraint may induce a subject to significantly change their decision process, e.g., under extreme time pressure they may switch to a simple heuristic, or in the dual-system approach, they may switch from the deliberative and slow System 2 to automatic and fast System 1 (Kahneman 2011). If subjects do not know what the time limit will be, they may respond in the following two ways: (a) be more likely to use the same decision process and simply terminate when the time limit is announced and (b) make a provisional fast decision and then search for a better alternative, as in Agranov et al. (2015).
Discussion
We have presented the state of the art of response time (RT) analysis in cognitive psychology and experimental economics, with an emphasis on strategic decision making. Experimental economists have only recently directed attention to RT, in stark contrast to experimental psychologists. A comparison of the methodology of these two groups exposes an important difference-experimental psychologists predominantly use procedural models that make explicit predictions about the joint distribution of choice and RT, while experimental economists predominantly restrict their analyses to descriptive models of RT. We offered arguments regarding the advantages of RT analysis, particularly in conjunction with procedural modeling. We are specifically concerned that investigating decision making in the lab without any time-pressure controls, or at least an explicit opportunity cost of time, might limit the external and ecological validity of experiments. In our view, this void in the literature deserves more attention than it has attracted. We envision significant advances in our understanding of behavior and its relationship to RT. Our assessment of the potential of RT analysis is partially inspired by results in cognitive psychology and we recommend a research agenda for strategic DM that parallels that of individual DM. At the very least, there is no reason not to collect RT data for computerized experiments as their collection can be done without disrupting or influencing the decision process and is costless. RT data provides further information improving model selection, the identification of decision processes and type classification. The collection of RT data is, of course, not a panacea: although it increases our ability to identify models and decision processes, it does not necessarily provide full identification. Furthermore, there are some important challenges that experimental economists face; many of these challenges are unique to strategic decision making, arising from the complex interaction of agents. We presented desiderata aimed at addressing these challenges and unlocking the potential of RT analysis.
We have discussed the multitude of ways that strategic players can adapt to time constraints (and offer a formal framework in ''Appendix 2''). For example, players may change how they search for information, how they integrate information, and how they adapt their beliefs about the strategic sophistication of opponents. Our discussion of the models in the literature concludes that most models need to be extended in new ways to fully capture these possibilities.
The majority of the literature has been devoted to investigating the relationship between social preferences and response time. There is an opportunity for new important work in non-social dilemma games, especially repeated games. Less researched, yet in our eyes, important topics for future work on RT include a thorough investigation of the speed-performance profile, and the relationship between emotions and RT. For the former, important questions include how people decide to trade off speed and performance, and how they allocate time to a set of tasks. For the latter, the role of emotions, such as anger, regret and disappointment, and their effects on response time seem worth further study.
In conclusion, we anticipate (and already see realized some of the predictions made in our 2013 version of this paper) that explicit modeling of RT data will provide important insights to the literature, especially in conjunction with other non-choice data arising from techniques that turn latent variables of processes into observable variables, e.g., belief elicitation and information search. Extending experimental practices to include RT and time-pressure controls is an important step in the study of procedural rationality and adaptive behavior in an externally and ecologically valid manner.
Appendix 1
Table 3 presents RT studies on strategic DM sorted into two categories, first published studies and then working papers. Within each category we ordered studies chronologically and within each year alphabetically. We also classify studies along the following dimensions: (a) whether RT was endogenous (en, i.e., no time constraints) or exogenous (tp = time pressure, td = time delay), (b) the type of model used to explain behavior, (c) the type of task, (d) whether other variables were measured, and (e) the main conclusions.
Appendix 2: A framework for time-constrained strategic behavior
We present a framework for organizing the multitude of ways that decision makers can react to time constraints. Our framework allows us to first ask which decision processes are adjusted in response to time constraints, and subsequently how these decision processes are adjusted. To accomplish this we first categorize different types of decision processes by the function they perform. Subsequently, we define and relate specific ways of adjusting to these categories of processes. We are guided by existing results for individual DM tasks. Miller (1960) hypothesized that DMs react to time pressure (or, more generally, any type of information overload) in several ways. We present the four main time-pressure adaptations from Miller (1960) that we deem most important and that have been robustly verified in the individual DM literature. We extend these specifically to strategic DM tasks and suggest other ways of responding to time pressure that are unique to strategic DM. In our working paper (Spiliopoulos and Ortmann 2016, Table 8) we provide a more detailed comparison of RT analysis across three types of tasks: judgment, individual DM, and strategic DM.
Decision processes
A complete procedural model should describe how relevant information is acquired and processed to arrive at a decision. The dynamics of the information-search and -integration processes (including the stopping rule) characterize the time required to reach a decision. Reaction time is often modeled as the sum of two main components: the decision component and the non-decision component. To model strategic DM it is useful to break down these two components into sub-processes and their associated response times.
The decision RT component consists of the following sub-processes:
Information-acquisition sub-processes
These processes require time to search for and acquire information. We further divide the acquisition of information into internal and external search. External search involves the real-time acquisition of information (stimuli) from the environment. Internal search is the retrieval of information stored in the memory system.
Strategic sub-processes
Significant time is also required to implement deliberative strategic processes, such as analyzing a game, forming beliefs about an opponent's behavior (or level of sophistication), eliminating dominated actions, et cetera.
Information-integration sub-processes
Time is required to compare and integrate the available information regarding choices. The input to these processes is the output of the information-acquisition sub-processes, and in the case of strategic DM may also include outputs from the strategic sub-processes if they transform other acquired information.
The non-decision RT component consists of:
Motor function response
Executing the required motor functions to indicate or implement a response also requires time.
Response time is therefore conceptualized as a function of the time required to complete the relevant sub-processes discussed above. The simplest function would be additive and separable; however, the assumption of separability, or independence, of these four components is tenuous. Non-separable processes may be an efficient use of limited cognitive resources and time.
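Under the simplest assumption of an additive and separable decomposition, this conceptualization can be written (in our own notation, as a sketch rather than a commitment to any particular model) as

RT = t_acq + t_strat + t_int + t_motor,

where t_acq is the time devoted to information acquisition (internal and external search), t_strat the time devoted to strategic sub-processes, t_int the time devoted to information integration, and t_motor the non-decision time of the motor response. Relaxing separability amounts to replacing this sum with a general function RT = f(t_acq, t_strat, t_int, t_motor) whose arguments may interact, for example when information is integrated while search is still ongoing.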
Adaptation to time constraints
We present some robust findings from the psychology literature on the adaptation of decision sub-processes to time pressure. These are primarily derived from individual DM tasks; consequently, they are useful for thinking about how information-acquisition and -integration processes may be affected in strategic DM tasks when strategic processes do not matter (much). Finally, we conjecture the existence of adaptations that are specific to strategic DM tasks and have yet to be thoroughly investigated in the literature. Table 4 presents our framework in a tabular format, relating decision processes to the type of adaptation, and to the types of models that can account for each adaptation. Since we have not yet presented the types of models extensively, the reader might want to return to this table after reading Sect. 3.
Acceleration
The existing decision sub-processes are performed more quickly, without changing strategy, at the possible cost of introducing errors (Ben Zur and Breznitz 1981; Edland 1994; Kerstholt 1995; Maule et al. 2000; Payne et al. 1988).
Filtration
Priority and attention to information acquisition and processing is given selectively to information that is perceived as more important (Maule et al. 2000). At the information-acquisition stage, filtration can be manifested in games as the retrieval of information from fewer options (i.e., a player's own actions in the game) or fewer contingencies (i.e., an opponent's actions in the game); this may be observed as fewer lookups or less gaze time per piece of information. Under time pressure, the predominant effect for individual DM tasks is a shift from alternative- to attribute-based search and processing. Alternatives are the elements of the choice set, whereas attributes are the characteristics of the alternatives that determine their value to the consumer; for example, specific cars (alternatives) may differ in safety, design and price (attributes). Alternative-based search examines the various attributes within the available alternatives and then compares the aggregate value of the alternatives, whereas attribute-based search examines the same attribute across alternatives, one attribute at a time and quite possibly in a very selective manner. At the information-integration stage, filtration affects the weighting of information in the integration process. Initial evidence on filtration found that negative information was relatively more important than positive information (Ben Zur and Breznitz 1981). However, the robustness of the direction has been contested and may be context-dependent. For example, Edland and Slovic (1990) find that filtration leads to greater weighting of positive information relative to negative information, whereas Maule and Mackie (1990) find no significant shift in relative importance. Note that any relative shift in attention between positive and negative information has important implications for the risk-taking behavior of individuals. Finally, we hypothesize another adaptation that we term focality enhancement. Under significant time pressure, subjects may attend more to information that has focal properties, e.g., larger payoffs, actions singled out by social norms, et cetera. Memory effects may also play an important role under constrained RT. Memory encoding and retrieval may be adversely affected, e.g., the number of items that are held in short-term memory may be restricted further beyond the proposed limit of 4 ± 1 items (Cowan 2000), which is an update from the 7 ± 2 items argued for in Miller (1956).
(Legend to Table 4: all models are described in detail in Sect. 3. SA = search and attentional models; LEX = lexicographic heuristics; DS = dual-system; ISR = iterated strategic reasoning models; SSM = sequential-sampling models; ADT = adaptive toolbox; SMA = substantive models with auxiliary assumptions.)
Strategy shift
Strategy shift is a change in the type of decision strategy selected. An adaptive decision maker could, for example, choose a heuristic that is effective for the current environment or one that is feasible given time constraints. In strategic DM a player under time pressure may change their strategy because of (a) a change in their belief about an opponent's likely strategy if the opponent is also under time pressure, or (b) insufficient time to execute the preferred strategy. Consequently, time pressure may cause a disconnect between the potential sophistication of a player and the realizable sophistication. For example, assume two players are engaged in the normal-form game of Table 5 and that both players are capable of using a Level-2 (L2) heuristic (Costa-Gomes et al. 2001; Stahl and Wilson 1995). Assume next that due to time pressure they are only able to implement the less demanding Level-1 heuristic. In Table 5, we denote the joint outcomes if players were to use a variety of different Level-k heuristics, or somehow managed to coordinate on the Nash equilibrium (NE); it is easy to verify that in this example there exists a non-monotonic relationship between the level of sophistication and payoffs. Of course, this is not necessarily so for other games. (Legend to Table 5: superscripts denote the outcomes of both players using a Level-k heuristic, abbreviated as Lk, or the Nash equilibrium, NE.) If players 1 and 2 both use the Level-2 heuristic under no time pressure (which entails beliefs that their opponent is Level-1), then they play M-R and their resulting payoffs are 12 and 41, respectively. If under time pressure the players restrict their beliefs about the opponent to be Level-0, then they both play the Level-1 heuristic (actions U-L), resulting in payoffs of 57 and 58. Thus both players are better off under time pressure. Experimental evidence confirms this hypothesized strategy shift from more to less sophisticated strategies under time pressure in 3 × 3 normal-form games: subjects switched from Nash equilibrium, Level-2 and other more sophisticated decision rules to predominantly using the simple Level-1 heuristic, which ignores the strategic aspects of the game. Since beliefs were not elicited in that study, we do not know whether the driver of the shift was a change in beliefs; we believe this question to be an interesting one that deserves further attention.
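The Level-k logic of this example can be made explicit with a short simulation; the 3 × 3 payoff matrices below are hypothetical stand-ins chosen only so that the U-L and M-R cells reproduce the payoffs quoted above (57/58 and 12/41), since Table 5 itself is not reproduced in this text.

import numpy as np

# Hypothetical payoffs (rows: player 1 actions U, M, D; columns: player 2 actions L, C, R).
# Only the U-L and M-R cells follow values quoted in the text; all other entries are invented.
p1 = np.array([[57, 40, 20],
               [60, 10, 12],
               [30, 35, 25]])
p2 = np.array([[58, 30, 62],
               [50, 20, 41],
               [45, 25, 15]])

def level_k_action(own, opp, k):
    # own: deciding player's payoffs (rows = own actions, cols = opponent actions)
    # opp: opponent's payoffs from the opponent's perspective (rows = opponent actions)
    # Level-0 is a uniform mixture; Level-k best-responds to a Level-(k-1) opponent.
    if k == 0:
        return None
    opp_action = level_k_action(opp, own, k - 1)
    expected = own.mean(axis=1) if opp_action is None else own[:, opp_action]
    return int(np.argmax(expected))

acts1, acts2 = ["U", "M", "D"], ["L", "C", "R"]
for k in (1, 2):
    a1 = level_k_action(p1, p2.T, k)
    a2 = level_k_action(p2.T, p1, k)
    print(f"Level-{k}: {acts1[a1]}-{acts2[a2]}, payoffs ({p1[a1, a2]}, {p2[a1, a2]})")

With these placeholder payoffs the simulation reproduces the pattern described above: two Level-1 players end up at U-L with payoffs (57, 58), while two Level-2 players end up at M-R with payoffs (12, 41).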
Criteria shift
This refers to a change in the level of the decision criterion rather than a change in decision processes or heuristics, e.g., Newell and Lee (2011). Recall that in sequential-sampling models, a decision is made once the evidence in favor of an option reaches the threshold value. Consequently, lowering this value leads to a faster response but typically increases the probability of decision errors, and vice versa.
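This speed-accuracy consequence of a criteria shift can be illustrated with a minimal two-boundary diffusion simulation; the drift, noise, and threshold values below are arbitrary illustrative choices rather than estimates from any study.

import numpy as np

def simulate_ddm(threshold, drift=0.5, noise=1.0, dt=0.01, max_t=10.0, n=2000, seed=1):
    # Simulate a symmetric two-boundary diffusion; the upper boundary is the correct response.
    # Returns (error rate, mean response time in seconds).
    rng = np.random.default_rng(seed)
    rts, errors = [], 0
    for _ in range(n):
        x, t = 0.0, 0.0
        while abs(x) < threshold and t < max_t:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        errors += x <= -threshold
    return errors / n, float(np.mean(rts))

for a in (0.5, 1.0, 2.0):   # progressively higher decision thresholds
    err, rt = simulate_ddm(a)
    print(f"threshold={a}: error rate={err:.2f}, mean RT={rt:.2f}s")

Lowering the threshold from 2.0 to 0.5 in this toy example shortens mean response times considerably while raising the error rate, which is exactly the trade-off the criteria-shift adaptation exploits.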
Iteration reduction
Apart from hierarchical beliefs in strategic DM, which rely on iterated computations, there exist other, non-belief-based strategies that require iterated reasoning; examples include iterated deletion of dominated strategies and backward or forward induction (or lookahead) strategies. We hypothesize that under time pressure players will likewise perform fewer such iterations. | 2018-05-09T00:43:46.005Z | 2016-03-28T00:00:00.000 | {
"year": 2017,
"sha1": "61849319383a08c27cea9fa6be92e03d466ecc1d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10683-017-9528-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d63c2e5d742c836b96d38acc4562a3c3cc806c7c",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Economics",
"Medicine"
]
} |
265659926 | pes2o/s2orc | v3-fos-license | The Archaeological Distribution of the Cuneiform Corpus
The present study offers a first comprehensive, quantifiable overview of the geographical extent and scale of the cuneiform corpus. Though cuneiform is one of the oldest and longest-lived scripts in history, the sheer size of its corpus, among the largest discrete bodies of written source material from the pre-modern world, is seldom properly appreciated. We review and evaluate past quantitative assessments of the corpus and current levels of catalogue digitisation and integration, pointing to gaps in general catalogues and principal issues relating to the quantification and interrogation of textual sources at the corpus level. Combining a newly developed open access spatial index of c. 600 locations from across Europe, Asia, and Africa where cuneiform texts have been found with a quantitative survey of reported finds from scholarly literature, we then proceed to discuss the formation of the cuneiform corpus as an archaeological artefact. Aided by an extremely broad diachronic and diatopic outlook on a uniquely large body of written source material, this study offers a novel perspective on written corpora as archaeological artefacts.
Introduction
'...and either dismissed them as naive and amateurish or, by way of an answer, merely mumbled something rather vague about a few hundred thousand texts?' (Streck 2010: 36) Known inscriptions from the cuneiform world, retrieved through the scientific excavation and clandestine digging of archaeological sites found across the Middle East over the last two centuries or so, are said to number anywhere from a couple of hundred thousand to several million artefacts (for various estimates, some of which are discussed in the following, see, e.g., Pallis 1956: 185-187; Kramer 1962: 299-301; Van De Mieroop 1999: 10-13; Peust 2000; Streck 2010; Reade 2017: 163; Michel 2020: 25). Trivial as they may be, the more serious perusal of such figures is curiously absent not only from the vast majority of introductions to the study of the cuneiform world, but also from related disciplines and comparative research. Authoritative studies on the emergence and spread of writing seldom engage with such figures, and the present study accordingly offers a first comprehensive quantitative survey of the geographical distribution of cuneiform inscriptions in archaeological terms. In our discussion, we review some key points arising from our analyses, centring on the characteristics of spatial distribution in particular. The thrust of our argument focuses on the question of corpus representativity and authority, an attempt to offer a first measure of the intensity of the light that our sources shine on the past.
Metrics of Corpus Scale
We should begin first by defining the units of measure, and some issues of nomenclature.We use 'inscription' in the following with reference to the inscribed artefact as a delineable, physical object.By 'text', we refer to the composition as an entity, not its physical medium.The traditional use of the latter term in cuneiform studies is largely interchangeable with the use of 'inscription' as applied in a range of related epigraphical subdisciplines, where 'text' is used in reference more specifically to the philological element, i.e., the actual writing (for a discussion of this ontology in formal terms, see, e.g., Avanzini et al. 2019: 5-7).A thorough review of this semantic conundrum as it applies to the study of cuneiform is beyond the remit of this study (see Michel 2021 for an insightful review).In using 'inscription' in the following, we simply wish to recognise a need to distinguish clearly between the text and its medium.
As cautioned above, estimates of corpus size are (or at least should be) subject to considerable qualification, and considered with reference to several parameters. Singular metrics are certainly poorly equipped to convey the essential empirical differences between the Natural History of Pliny the Elder and any inscribed Egyptian scarab, both of which would produce the same value (1) if counting inscriptions, but wildly different values (upwards of 1,000,000 against 1) if counting words. The former emphasises materiality and place, the latter lexical wealth and information content. Measures of corpus size are, in other words, dictated by the divergent empirical and epistemological positions separating subfields more inclined towards epigraphy (the study of inscription) and philology (the study of text), respectively. Corpus word count may be a more meaningful yardstick when assessing the prevalence of Greek or Latin corpora, as both hold several extraordinarily large textual compositions that are duplicated across a large number of manuscripts. As repetitions of the same textual entity would inflate the statistically relevant number of unique attestations of discrete works or of a single word, word count metrics are particularly relevant for assessing the relative prevalence of linguistic entities in a corpus (Peust 2000: 252). This approach, therefore, also emphasises an etic attitude towards philological and linguistic analyses, in which language takes precedence over materiality.
Corpus inscription count, on the contrary, is a very common metric when dealing with corpora such as cuneiform, Runic, or Old Arabic.Here, the individual inscription is almost always unique and often carries a very tangible, typically archaeological, provenience, allowing us to place the creation, use, and deposition of the inscription in place and time with an oftentimes high level of spatial and temporal accuracy.Introducing a word count metric here is possible in theory, if mostly barred by lack of properly refined data in practice.Word counts are also hampered by issues of definition at the micro-level: how to isolate discrete words, how to approach fragments carrying only a grapheme or two, and so on.The ability of inscription count metrics to more meaningfully relate the prevalence of writing as a part of a given material culture horizon makes them more immediately relevant to the indexing and evaluation of archaeological data, however.
Different metrics, in short, cater to diverging disciplinary outlooks.Word count may provide a fuller appreciation of the evidential sample available to students of languages, as well as a more accurate measure of the full extent of textual information nested within a corpus.Inscription count is likely to provide us with a better impression of the spatial, chronological, and ultimately social permeability of writing from a material perspective, when accounting for physical and cultural processes of preservation and decay.Measuring the number of inscriptions is a more reliable vector for assessing the prevalence of writing within material culture horizons of a given society, as can be inferred from several studies of the production and consumption of texts in more recent historical periods where the number of discrete works, rather than their content, is the adopted unit of measure (Buringh/ van Zanden 2009;Xu 2013).To drive home this point, the position of book production as a measure of learning, education, and economic development worldwide has been central to policymaking of various agencies of the United Nations ever since the end of the Second World War (Giton 2016: 51-53).
Measures of the Cuneiform Corpus
The basic definition of 'cuneiform', as the writing system has been known to European scholars since the seventeenth century CE (Pallis 1956: 18-27), is a type of writing produced by the pressing of a stylus into a surface of damp clay. This action produces wedge-shaped elements (cuneus is Latin for 'wedge') that are combined to form signs of varying degrees of complexity. Even if their appearance remains firmly tied to the physical properties of clay, cuneiform signs were used on wax writing-boards, incised on stone and metal surfaces, incorporated as a graphic element in frescoes and glazes, and even, though rarely, written using ink (for examples of the latter, see Finkel/Taylor 2015: 87). As a particular writing system, cuneiform is attested from around 3,200 BCE until c. 80 CE, serving as the carrier of a wide range of Semitic, Indo-European, and isolate languages, including Sumerian, Eblaite, Akkadian, Hurrian, Urartean, Hittite, Luwian, Palaic, Hattic, Elamite, and Old Persian (augmented from Edzard 1976-1980: 545). To these are added, due primarily to scholarly convention, the geographically and historically proximate Proto-Elamite and Linear Elamite scripts from western Iran, even though these constitute clearly separate systems of writing. Finally, one should add the transitional case of Ugaritic or alphabetic cuneiform from the Eastern Mediterranean coast, where a selection of cuneiform signs has been deployed in the writing of alphabetic values.
Quantitative surveys of this corpus are few and far between, even if its scale, richness, and diversity is widely acknowledged by specialists (Van De Mieroop 1999: 11).This lack may, to some extent, be ascribed to the rapid growth of the corpus from archaeological excavations during the first half of the twentieth century CE CE.Next to copies and casts, the nineteenth century initial deciphering and early study of cuneiform drew on a limited number of physical inscriptions acquired by European antiquarians and Western travellers to the Middle East.An inventory of these sources might, even at the turn of the century, still be presented with at least some reasonable claim to being exhaustive (Fossey 1904: 65-79).The onset of several large archaeological excavation projects around this time, accompanied by a surge in the acquisition of cuneiform texts from the antiquities market by museums and collectors, radically altered this state of affairs (Kramer 1962: 300-301).Just fifty years later, reviews of the research history of cuneiform studies could provide only approximate, if breathtaking, estimates of the number of inscriptions unearthed.The, by no means comprehensive, survey of the number of inscriptions from major sites summarised in Pallis' retrospective Antiquity of Iraq gives a total of close to 200,000 inscriptions (1956: 185-187; see also Schmökel 1955: 122 for a similar number).A cursory overview by Samuel N. Kramer, certainly one of the most pre-eminent cuneiform scholars of the twentieth century, provided largely similar figures, followed by a tentative estimate of the entire corpus as numbering perhaps half a million inscriptions (Kramer 1962: 299-301; for an earlier appearance of the same figure, see Neugebauer 1952).This estimate, cavalier in its inception as it may seem, remains widely cited to this day (see, for example, Roaf 1990: 14;Bottéro 2001: 22;Fink 2020: 137) In 2000, as a voluminous aside to a book review, the Egyptologist Carsten Peust assembled and evaluated basic numerical estimates of corpus size for a number of scripts from the ancient world in order to assess the relative strength of each corpus in terms of lexicographical authority.The basic metrical premise for Peust's discussion was the number of words in all unique compositions available from currently published texts (2000: 252-253).As such, his survey only indirectly touches on estimates of corpus size as derived from the number of discrete inscriptions, for which, as far as cuneiform was concerned, there was still little but Kramer's estimate of c. 500,000 records to quote.The results are illuminating and important, however, in that they embody a first attempt at quantitatively comparing a number of written corpora from the ancient world through the formal definition of basic variables and units of measurement.Drawing on this study, a seminal paper published by Michael P. Streck in 2010 offered a first detailed and formalised survey of the number of cuneiform inscriptions known from excavations and museum collections around the world.Providing an overall figure for the entire corpus hovering around 550,000 individual inscribed objects, fragments, and seals, this survey also included an estimated total number of words contained within the corpus, modelled on counts from select subsets of texts (Streck 2010).The estimated total of some 15 million words, if juxtaposed with the calculations presented by Peust, exceeds all other major corpora of the ancient world, except Greek.
Looking into the near future, open access digital resources are certain to become the principal authorities for qualified assessments of corpus size. In the case of cuneiform, the catalogue of the Cuneiform Digital Library Initiative (CDLI), the canonical index of cuneiform inscriptions worldwide, currently holds metadata records on some 350,000 unique inscriptions. Repositories devoted to specific spatial or temporal transects of the corpus may contain comparatively fewer records, but typically display higher degrees of consistency in terms of accuracy and detail, although a formal evaluation of data quality across digital repositories in cuneiform studies remains to be seen. The Database of Neo-Sumerian Texts (BDTNS), devoted to the administrative records of the twenty-first century BCE Third Dynasty of Ur, now includes in excess of 100,000 records. ARCHIBAB, focusing on second millennium BCE Syria and Iraq, has recently passed 35,000 records. The gradual consolidation of such repositories into a comprehensive index of all cuneiform inscriptions, and their further augmentation with more elaborate resources for spatial, chronological, and artefactual metadata, will eventually provide students of cuneiform with an immense and unique empirical base for large-scale data analysis. As will be demonstrated in the following, digital resources still display considerable gaps and sample bias, and not only because troves of tablets remain locked away in museum basements and storerooms around the world.
Exploring Distributions of Writing
Considering the current state of digitisation of text catalogues and the curation of associated metadata collections, large-scale analyses of textual corpora from an archaeological perspective remain surprisingly rare.Where researchers have indeed attempted to explore digital conversions of inscription catalogue data into spatially sensitive studies of the ancient past, results have proven both promising and insightful.An illuminating approach employed the spatial distribution of finds of dated display inscriptions to detect spatial polity contraction in the eighth to tenth century CE CE Maya Lowlands (Ebert et al. 2012; also Kennett et al. 2012: 791), demonstrating the ability of the textual record to relay social currents in a spatial dimension.Research on epigraphic production within the Roman Empire has fostered several studies on spatial and chronological variation in the distribution of writing at the level of individual provinces or by comparing data from different parts of the empire (e.g., MacMullen 1982;Meyer 1990; Nawotka 2021).More recently, researchers have used a curated version of the catalogues of the Epigraphische Datenbank Heidelberg9 and the Epigraphik-Datenbank Clauss/Slaby,10 totalling more than 600,000 inscriptions, to evaluate corpus characteristics of Latin epigraphic writing from Western Europe over time and space.The resulting distributions demonstrate clear trends in the production and consumption of writing over a period of many centuries, including regionally specific spikes in epigraphic activity and changing prominences of public and private inscriptions from province to province (Heřmánková et al. 2021: 168-174).Furthermore, the combined analysis of two temporally and spatially overlapping data collections brings out sampling biases not clearly recognised otherwise (2021: 69-166).Such studies should serve as exemplary applications of the vast digital resources now available for analysis within a spatial environment.
Novel as such approaches may seem from a philological standpoint, they are commonplace in closely related disciplines.Studying spatial patterning of material culture has a long and distinguished track record as a means of understanding and analysing archaeological data in a geographical and relational frame (Hodder/ Orton 1976;Conolly/Lake 2006;Gillings et al. 2020).The ability to integrate and analyse vast amounts of digital data in a spatial dimension, so characteristic of approaches developed in archaeological research in the Middle East over the last couple of decades, offers ample demonstration of the potential insights that can be gained from a consideration of spatial patterns in the distribution of all types of material culture.The concurrent expansion of analytical breadth and depth, especially within a comparative frame of analysis, has spawned a wide range of studies on broader societal dynamics, reaching across eras and continents (Kintigh et al. 2014;Smith et al. 2012;Wright/Richards 2018).
Cuneiform inscriptions, when considered as manifestations of material culture, hold particular qualities that should make the relevance of broader, corpus-level analyses sensitive to diachronic and diatopic patterning abundantly clear.As the cuneiform script has not been in active use since c. 80 CE CE, and remained largely undeciphered until the mid-nineteenth century (Larsen 1996), the vast majority of cuneiform inscriptions belongs to an archaeological reality.They derive from stratified archaeological contexts on a par with potsherds, bones, and soil samples, and the same level of accuracy that can be assigned to such types of material can also, in theory, if not always in practice, be assigned to a cuneiform tablet.Such a statement is not made in ignorance of the drawbacks presented by a long history of looting and smuggling of cuneiform artefacts, nor the manifold problems of provenance assessment and archaeological contextualisation produced by this state of affairs.But it should be generally accepted by all that virtually all cuneiform inscriptions known to us derive from archaeological strata or standing monuments, from which they have been removed relatively recently.As with epigraphical corpora more generally, the preservation and discovery of cuneiform inscriptions is then subject in the first instance not to the whims of the archivist or librarian, but to the workings of the spade (or pick, rather).Even when dealing with the largely undocumented and entirely clandestine retrieval of inscriptions, this basic premise underlies the many, and often successful, efforts by researchers to follow inscriptions back to their archaeological origin (e.g., Tsouparopoulou 2017).Where a comprehensively documented archaeological context of a cuneiform find is available to us, the range of potential insights on the production and consumption of writing in the ancient world is vastly expanded.Courtesy of its preferred physical medium, the nature of inscriptions deposited and preserved is also special.Consider, for example, the excellent study by Sauvage (1995) on the dynamics of tablet production, deposition, destruction, and reuse at eighteenth century BCE BCE Ḫarādum, demonstrating how cuneiform archives may habitually include inscriptions that have survived a great many tribulations.The study of patterns in the deposition and discovery of cuneiform inscriptions in larger assemblages has been pioneered by Pedersén (1998;consider also van Soldt 1991), reflecting a close relationship with the past use context of writing, in archives, libraries, and similar settings.In all, the cuneiform corpus displays a fairly unique level of archaeological grounding, preservation abilities, and abundance and diversity, windows towards a deeper understanding of broader patterns of ancient writing that can be fruitfully exploited within a spatial dimension.
Despite their inherent ties to the archaeological record, more ambitious analyses of the spatial distribution of inscriptions have not previously been attempted in cuneiform studies, due to inconsistencies in geographical coverage and poor standardisation of data collections as far as the working georeferencing of inscribed finds is concerned (Rattenborg et al. 2021a).There is no questioning the great service done to the field by numerous digitisation initiatives over the last few decades, yet an evaluation of spatial coverage of current online catalogues will demonstrate noticeable gaps in resources available for large-scale analysis of corpus composition and distribution.This is a consequence of current emphases on curating, editing, analysing and disseminating cuneiform inscriptions in the digital sphere, being biased towards larger textual assemblages from major archaeological sites.There are tangible reasons for such priorities, to be sure.From a philological point of view, it is rather pointless to argue the superior importance of an assemblage of tens of thousands of inscriptions over an assemblage of one or two.Even so, the lack of a comprehensive understanding of the spatial distribution of cuneiform inscriptions will leave us poorly equipped to address questions relating to the overall prevalence of writing in the cuneiform world at a general level, and to extrapolate from there to more fundamental aspects of social history, such as literacy, recording, knowledge production, etc.In offering a comprehensive overview and provisional evaluation of the geographical distribution of cuneiform finds, this study offers a first step towards spatially sensitive approaches to the cuneiform corpus.
Methodology
Our approach is founded on the integration and evaluation of two related data sets, both assembled as part of Geomapping Landscapes of Writing (GLoW), a research project hosted by the Department of Linguistics and Philology of Uppsala University and funded by Riksbankens Jubileumsfond (grant number MXM19-1160:1).The first comprises a geospatial index of locations with reported finds of cuneiform inscriptions at site level, counting al-most 600 records spread across the wider Middle East and adjoining regions.The second constitutes the data register of a related survey of secondary literature, which provides an estimate of all inscriptions found at each archaeological locale for an aggregate total of c. 430,000 inscribed artefacts.When combined, these data sets then provide a basic, quantifiable and global overview of archaeological finds of cuneiform inscriptions from all periods of the history of the script.Below, we review the formal conventions guiding the assembly of the respective resources, as well as some characteristics of the acquired data relevant for our analysis.
Geospatial Index
The geolocation of sites with known finds of cuneiform inscriptions is retrieved using the most recent version (v.1.6, 1 July 2023) of the Cuneiform Inscriptions Geographical Site (CIGS) index (Rattenborg et al. 2021b). This resource provides point vector centroids and associated attribute data for close to 600 discrete locations across Europe, Africa, and Asia where cuneiform has been found (Fig. 1). At the time of writing, the CIGS index constitutes the most comprehensive geographical overview of the corpus available, more than doubling the number of geolocated records retrievable from the provenience index of the CDLI and the Pleiades data subsets deployed by various Open Richly Annotated Cuneiform Corpus (ORACC) projects, for example Ancient Records of Middle Eastern Polities (ARMEP). Past and current versions of the data set are deposited with the Zenodo research data repository maintained by Europe OpenAIRE and available for reuse under a CC-BY 4.0 licence. A more in-depth description of the data set, as well as suggestions for its application in research and data visualisation, has been provided elsewhere (Rattenborg et al. 2021a), and we will only review specific variables employed in the following analyses in more detail here. Individual archaeological sites mentioned in the following are followed by their three-letter CIGS identifier, given in parentheses. Records included in CIGS refer to geographical locations with finds of cuneiform inscriptions. Locations have been established from the availability in printed sources or digital catalogues of any inscribed artefact carrying a catalogue identifier, for example an excavation or museum number, regardless of the extent or detail of associated metadata. Aspiring to a high level of empirical transparency, the index does not include locations with reported finds of cuneiform, but without discretely documented artefacts. Thus, purported bricks with inscriptions 'in the cuneiform character' reported from Balḫ and Farah in Afghanistan in the mid-nineteenth century (Ferrier 1856: 207 and 393-394) are not included, as the account provides no identification of any one unique inscribed artefact. Included, in contrast, are discrete, if only provisionally documented, inscriptions, e.g., an illustration and preliminary translation of an inscribed brick of Sîn-aḫḫē-erība (Sennacherib) from Qalʿat ʿAwayna (AWA) in northern Iraq (Layard 1853: 225-226; Furlani 1934: 125).
The index does not include proveniences defined on the basis of a conceptual historical, but geographically undefined place alone.Though many provenience indices available from online digital text catalogues within the disciplinary domain make little distinction between a historical place and a geographical location, their verification is subject to very different criteria, and should be considered entirely different entities. 15As historical places are relational and geographical locations are physical entities, they are also not fully compatible in analytical terms.In the present study, for example, we augment the publicly available version of the index with polygon vector data for proveniences that can be traced on satellite imagery, to establish a basic variable for surface extent of places where cuneiform has been found.Generating this type of spatial information for a historical place without a known physical location is not possible.Some cases where the association of a historical toponym and an archaeological feature may still be debated, but with a relatively high degree of certainty as to the whereabouts of the provenience on a more general level, are included, for example in the inclusion of ancient Garšana (GRS) as modern Tall Baridīya (Molina/Steinkeller 2017).
Finally, the index does not reflect knowledge of the primary or secondary historical context of an inscribed artefact, only its archaeological origin.As such, the likely origin of the Law Stele of Ḫammurabi, namely Sippar (SAP) in southern Iraq, will not be included, only its place of archaeological discovery, namely the city of Šušan/ Susa (SUS) in southwestern Iran, where it was taken by a triumphant Elamite army in the fourteenth century BCE BCE (Roth 1997: 73).At a more mundane level, the index will also not seek to qualify finds of inscribed bricks or stone fragments that may theoretically have been brought to their archaeological origin in a more recent era as building material (e.g.finds of baked bricks at small sites in the Iraqi alluvium, cf.Adams/Nissen 1972: 217; or the inclusion of a royal stele from the Sîn Temple in Ḫarrān (HAR) in the building of a town house at nearby Eski Harran (EHA), cf.Pognon 1908: 1-14; Rice 1957: 469).
Assemblage Estimates
We join the CIGS geospatial index with a tabular data set providing overall assemblage estimates, or the total number of cuneiform inscriptions derived from individual archaeological locations, referred to here as CIGS-AE (Smidt et al. 2023). This data set has been compiled by Gustav Ryberg Smidt from 2020 to 2021, with subsequent updates by GLoW project staff based on additions to the working version of the CIGS index. The version employed here complements records included in CIGS version 1.6 (1 July 2023), thus providing estimates for an overall c. 430,000 inscriptions and fragments of inscriptions reported from close to 600 archaeological locations, as well as bibliographical references for these figures. This data set, including a complete bibliographical index of sources cited, is deposited with Zenodo and available for reuse under a CC-BY 4.0 licence.
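For the analyses that follow, the two resources are linked on the shared site identifier. A minimal sketch of such a join is given below; the file names and column labels are placeholders for illustration and do not necessarily match those of the published CIGS and CIGS-AE releases on Zenodo.

import pandas as pd

# Placeholder file names and column labels (assumed, not the published schema).
sites = pd.read_csv("cigs_sites.csv")             # e.g., columns: cigs_id, name, lon, lat, accuracy
estimates = pd.read_csv("cigs_ae_estimates.csv")  # e.g., columns: cigs_id, n_inscriptions, reference

corpus = sites.merge(estimates, on="cigs_id", how="left")
corpus["n_inscriptions"] = corpus["n_inscriptions"].fillna(0)

print("locations:", len(corpus))
print("total inscriptions:", int(corpus["n_inscriptions"].sum()))
print(corpus.sort_values("n_inscriptions", ascending=False).head(10))

A joined table of this kind can then be summarised by country, locational accuracy, or assemblage size bin, as in the analyses discussed below.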
In building this data set, we have taken as our basic entity of study the material, inscribed artefact as it exists archaeologically.This means that scholarly estimates of, or attempts at assigning individual fragments to, a notion of an original document have been ignored to maintain a uniform data set.Very few reliable estimates of the original number of documents from any one archaeological location are available from the general literature, and the vast majority of reports offer no information on such matters.Readers are encouraged to consider discussions of the ratio of complete documents to inscribed fragments in various regions and periods (Archi 1986: 78-79;Zimansky 2004: 316), but extrapolating from such figures is much too speculative an exercise to be pursued here (as noted also in related studies, e.g., Pedersén 1998: 6).This will also become evident from the discrepancy observed between estimated totals found in the literature-and included in the present survey-and corresponding numbers retrieved from digital catalogues concerning some major assemblages, for example the c. 11,000 records reported for Ebla (EBA) (Archi 1997: 184) as opposed to the more than 3,000 edited tablets currently included in the Ebla Digital Archives. 16The former figure includes a substantial number of fragments that will eventually dissolve into a smaller number of discrete inscriptions (Scarpa 2021: 3).
As we are concerned with the material, inscribed artefact that exists in the archaeological record, assemblage estimates will consider only the number of artefacts excavated (scientifically or clandestinely) at a given site, not the number of artefacts that can be expected to be retrieved from a given site (cf.Peust 2000: 252).It is, theoretically at least, possible to roughly calculate the number of inscribed bricks from Čoġā Zanbīl (COZ) (Ghirshman 1966: 13) or the number of inscribed ashlars included in the construction of Sîn-aḫḫē-erība's aqueduct at Ǧarwāna/Jerwan (JRW) (Jacobsen/Lloyd 1935: 19-27), but applying such conventions across the entire data set is not feasible.Even when referring only to the number of inscriptions found at any one site, numbers are bound to fluctuate considerably.Where multiple and significantly diverging estimates are available from the secondary literature, we have to the extent possible sought to evaluate individual estimates in light of the pertinent research history and the empirical basis of the figures suggested.For a great many assemblages, the figure given is not contested.For others, let us take Umma (JOK) as an example, aggregate numbers of inscriptions listed in encyclopaedic site biographies (Waetzoldt 2014(Waetzoldt -2016) ) may deviate from the sum of records contained in relevant digital catalogues (Molina 2008: 52 and more recent counts from BDTNS).To maintain formal data collection consistency, we have generally sought to give preference to figures provided in peer-reviewed print publications where these did not significantly disagree with numbers available from dynamic digital resources.In the few cases, as with Šušan/Susa (SUS), for example, where no updated printed survey of inscription finds is available, we have relied on aggregate figures from digital catalogues, accepting that these are dynamic resources that are unlikely to present an exhaustive overview of inscriptions known from a single site at the present stage.With all of these caveats duly noted, individual data points in the resulting index are certain to draw comments from experts more familiar with their particular characteristics than the present resource may aspire to be.
Analyses
Applying and linking these two data sets allows us to review the relative scale of cuneiform finds at site level and the geographical distribution of cuneiform finds with reference to location, site size, and assemblage size.At a more general level, the assembled resources will also enable a consideration of certain characteristics of spatial density of find-spots and the number of inscriptions found.This approach provides a basic statistical overview, in a quantitative as well as spatial sense, against which we may evaluate current atti-tudes towards the number and prevalence of cuneiform inscriptions as they appear in the archaeological record.The provisional nature of this exercise should be emphasised, however.As our quantitative data is based on estimates of the overall number of cuneiform inscriptions retrieved from individual archaeological sites, it follows that significant numbers of the artefacts encompassed by these figures have not been subjected to proper metadata indexing, much less edited and published.Accordingly, the data sets presented here hold no chronological dimension, nor will they offer any information on distributions of material, script, language, or genre.With these limitations in mind, we may turn to consider aspects of data quality, accuracy, and coverage of the data employed, prior to discussing statistical impressions arising from their analysis.
Spatial Accuracy and Certainty
As noted elsewhere (Rattenborg et al. 2021a), records included in the CIGS index are assigned a level of locational accuracy reflecting the degree of certainty with which their position can be established.Individual accuracy levels are listed and described in the table below (Table 1).The highest, level 3, indicates an archaeological feature, typically a settlement mound, that can be accurately traced from satellite imagery, and for which the corresponding point vector contained in the index has been derived from an associated polygon vector drawn around the archaeological feature.The next, level 2, is a representative point without a defined surface extent, typically a submerged site, for example Tall Bazmusian (BZM) now inundated by the Dūkan Lake, or the known location of a rock inscription like Ganǧnāma (GJN) in the Iranian Zagros, which can, however, not be drawn.The third, level 1, is tentative, indicating a horizontal margin of error of up to around 1,000 m, including the approximate find-spot of a tablet in a certain field, for example at Hasankeyf (HSK) in central Turkey, or the approximate location of a poorly documented rock inscription.The fourth, level 0, indicates that the record in question relates to a discrete archaeological site, but that the geographical location of this site is not known with any meaningful degree of precision.These typically appear in museum registers, and ultimately derive from antiquities dealers, e.g.Zaʿala (ZAA), a mound located somewhere on the southern outskirts of modern Baġdād (Reade 1987).It should be emphasised that point vector data for certain and representative locations are largely commensurable in terms of the level of locational certainty implied.They differ mainly in the way in which they have been defined, the former automatic, the latter manually.As such, close to seventy-five per cent of records contained in the data set can be considered accurately located.If turning to the distribution of cuneiform inscriptions across these categories as derived from the CIGS-AE survey, certain locations account for 99.2 per cent of all inscriptions included in the data set, with around 0.4 per cent of inscriptions deriving from representative and tentative locations respectively.Locational accuracy is impacted by the relative prominence of certain types of finds, which will become evident when comparing distributions for the four modern countries with which most records in the data set are associated (Fig. 2).The higher numbers of individual, open-air inscriptions (more on which below), which are typically poorly recorded as far as accurate information on their archaeological origin is concerned, cer-tainly impact distributions for Turkey and Iran.A comparatively higher number of finds from much more easily delineated settlement mounds are present in distributions from Syria and Iraq.
Levels of Coverage
The CIGS geospatial index and the associated assemblage estimates contained in the CIGS-AE file are the results of a thorough survey of specialist literature and online data collections conducted over the course of three years.To review and contextualise levels of global coverage of these indices relative to the data collections of existing digital repositories in the field, the two maps presented below (Fig. 3) compare the geographical distribution of records in the CDLI catalogue-for which an archaeological provenience can be either securely or tentatively established-with the distribution of estimates contained in our survey.Inscription counts from the former resource are drawn from a catalogue dump dated 8 August 2020, reflecting the state of the CDLI catalogue prior to the initiation of data sharing with the GLoW project, which is still ongoing.As will be readily apparent, the former resource exhibits considerable gaps in coverage and comprehensiveness when compared to that of the CIGS and CIGS-AE indices.The divergence applies primarily to smaller finds from peripheral areas, as the latter includes a large number of archaeological locations with finds of one or a couple of cuneiform inscriptions.As previously noted, there are reasonable explanations for such a bias, but it nevertheless remains an important factor to consider when addressing research questions contingent upon geographical distribution.In addition to the exhaustive geographical coverage of the CIGS index, it is worth noting here the relative agreement of overall estimates of corpus size available from the CIGS-AE index and related surveys, namely the corpus overviews assembled by Pallis (1956: 185-187), Kramer (1962: 299-301), andStreck (2010).As noted above, the aggregate totals of approximately 200,000 inscriptions suggested by Pallis and the estimated c. 500,000 inscriptions proposed by Kramer find further confirmation in the much more detailed overview provided by Streck, which arrives at a total of 530,000 inscriptions and fragments.It is worth emphasising here that the CIGS-AE index has been assembled based on estimated totals from the scientific literature, rather than catalogued artefacts, meaning that while this data set ignores unprovenanced inscriptions found in public and private collections around the globe, the figures provided will include, in theory at least, uncatalogued artefacts from archaeological sites.
The Geographical Extent of the Cuneiform Corpus
Let us consider the basic spatial statistics of the assembled data set. Finds of inscriptions in cuneiform or related scripts included in the data set are found across Europe, Asia, and Africa, and the territories of 24 modern nations (Fig. 4). While the vast majority of the locations included here are situated within Iraq, Turkey, Iran, and Syria, peripheral finds extend over a much wider geographical zone. Cuneiform occurs across an area reaching from the Central Mediterranean to Eastern Afghanistan, and from the steppes of Romania and Southern Russia to the Libyan and the Arabian Desert. A distance of roughly 5,000 km separates the westernmost find, an inscribed bronze vessel discovered in a seventh century BCE tomb in Falerii (FLO) in Central Italy (Cristofani/Fronzaroli 1971), from the easternmost find, an Achaemenid silver piece bearing parts of an Elamite inscription found in a coin hoard in a Kābūl suburb (KBL) in eastern Afghanistan (Curiel/Schlumberger 1953: 41 and III 12; also Hulin 1954). There are close to 3,000 km between the northernmost find, an inscribed alabastron of Ŗtaxšaça (Artaxerxes) I found at Novyy Kumak (NKM) on the Ural River (Trejster 2012), and the southernmost, a small stone portable with an Achaemenid inscription from Idfū in Upper Egypt (Michaélidis 1943: 96-97).
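The distances quoted above are straight-line, great-circle figures. A minimal sketch of how such a figure can be computed is given below; the coordinates are rough illustrative values for the two find-spots named, not the precise centroids recorded in the CIGS index.

import math

def haversine_km(lon1, lat1, lon2, lat2):
    # Great-circle distance between two points, in kilometres, on a spherical Earth.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Approximate coordinates (lon, lat): Falerii in central Italy and Kabul in eastern Afghanistan.
print(round(haversine_km(12.4, 42.3, 69.2, 34.5)))   # roughly 5,000 km, in line with the figure above

Applied across pairs of records in the geospatial index, the same function supports simple measures of the spatial spread and density of find-spots.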
This geographical distribution certainly underscores the pre-eminence of Iraq in the study of cuneiform cultures.More than two thirds of all inscriptions, or upwards of 300,000 of the records included here, were found in Iraq, dwarfing the many tens of thousands reported from archaeological sites in neighbouring Syria, Turkey, and Iran.Together, these four countries account for some eighty per cent of all known locations with cuneiform finds, and more than ninety-nine per cent of all inscriptions (Fig. 5).In quantitative terms, finds from other countries are many times smaller, and often attributable to haphazard discoveries or a limited range of artefact types.The c. 400 inscriptions reported for Egypt, in fifth place, for example, derive almost exclusively from the famous Late Bronze Age correspondence of al-ʿAmārna (AKH).The c. 200 inscriptions from Armenia, in sixth place, are predominantly display inscriptions, for example, stelae, rock faces, and metal implements.But then again, a comparatively high number of finds of small numbers of inscriptions across archaeological sites in Israel, Palestine, and Lebanon seems to replicate assemblages found in Syria and Iraq, though on a much more modest scale.Looking beyond the geographical outline and primary concentrations of cuneiform inscriptions, liminal cases turn up even further afield.Three fragmentary display inscriptions found in some sixteenth century housing foundations in central London in 1890 (Fig. 6) probably reached Great Britain as ballast of a merchant vessel or the occasional souvenirs of a curious traveller (Evetts 1891), and well illustrate the possible reach of the corpus prior to the initiation of large-scale trade in cultural artefacts in the late nineteenth century.A host of debated strays can also be found in the literature.An inscribed Old Babylonian cylinder seal-used as an amulet in more recent times-held in the collections of Nagpur Central Museum (Suboor 1914), has been suggested, without further evidence, to derive from somewhere on the central Indian subcontinent (Lal 1953: 101).Incisions on a silver piece from Mōhenǧō Dārō / Mohenjo Daro originally suggested to be cuneiform characters, and occasionally presented as such in the literature (Kosambi 1941: 395-398;see, e.g., Dhavalikar 1975;Goyal 1999: 130), have not been confirmed by specialists (Marshall 1931: 519).The reported mid-twentieth century discovery of an inscribed hexagonal cylinder seal from a Roman legionary camp on the Austrian Danube (Swoboda 1964: 275) has since been revised, as the purported inscription is unconvincing (see Dembski 2005: pl. 130).To these may be added a host of peculiar finds from Western Europe and North America (see discussion with further references in Finkel 1983), all generally interpreted as the misplaced findings of Western antiquarian interests of the late 18 th and early 19 th centuries.While such a stance seems entirely logical for most cases, provenance histories are far from always easy to disentangle (for a colourful example, see Rattenborg 2023).
Inscriptions found in a clearly secondary context, but, theoretically at least, in relative proximity to their origins, have been maintained in the current data set and are worth noting, the more so because they demonstrate an aspect of the history of cuneiform writing that has received surprisingly little attention in the literature, namely the keeping of cuneiform inscriptions as relics of local communities in more recent history (see Verderame 2020: 228 and note 63, where similar points are raised). Most illustrative here are the Babylonian display inscriptions incorporated into the early construction of the great mosque in Ḫarrān (HAR) in southern Turkey. Here, three large stelae of Nabû-nāʾid/Nabonid (r. 556-539 BCE), ostensibly taken from the much-famed temple to the Babylonian moon god Sîn, were intentionally embedded face-down in the pavement at each of the three main gateways leading into the mosque courtyard (Rice 1957: 468-469; Gadd 1958: 35), illustrating an intriguing dialogue between past and present systems of belief. The use of cuneiform inscriptions as relics in a religious setting is also seen on Baḥrain, namely the so-called 'Durand Stone' acquired by British explorers in the eighteenth century, which was embedded in the inner sanctuary of a madrassa in the town of Bilād al-Qadīm (BLQ) in the northern part of the island (Durand/Rawlinson 1880: 193-194). A similar example comes from the fifteenth-century tomb of Šāh Nimatullāh in the town of Māhān (MHN) in central Iran, where a ceremonial stone weight produced during the reign of Dārayavauš/Darius the Great (r. 522-486 BCE) was first reported in the late 1850s (de Gobineau 1864: 323; also Weissbach 1910: 481). Many more possible examples of such engagements with the material remains of the past are on display in the highlands of eastern Turkey and Armenia, where fragments of a substantial number of Urartean display inscriptions have been found incorporated into the buildings of mosques and churches (examples are too many to mention here, but consider, e.g., Schulz 1840: 299 no. 38; examples in Salvini 2008: 55-64). Although the precise context of their inclusion in such structures is not always possible to reconstruct, the often prominent placement of inscribed pieces seems too regular to be a mere coincidence. Looking beyond inscriptions with a fixed archaeological provenience, inscriptions transformed into portable amulets also occur and accentuate comparable nodes of memory and veneration. A particularly illustrative example is the unprovenanced twenty-second-century BCE stone foundation plaque of Gudea carrying a much later ʿUmayyad incantation in Kūfī script (George 2011: 19-20 and pl. XI). The use of engraved and/or inscribed cylinder seals (including examples from India and Austria, cf. this section, above) may equally well be seen as a conscious engagement with relics of the past. The engagement of later local tradition with cuneiform inscriptions, regardless of whether these could be read or not, closely resembles popular veneration of inscribed items by village communities in Medieval Western Europe (Moreland 2001). While outside the scope of the present study, these brief points should serve as a manifest reminder of the cultural meaning of these inscriptions also for more recent inhabitants of the Middle East. The veneration bestowed upon such items historically adds a further facet to discussions of the diaspora of inscribed artefacts now found in museums around the world.
Assemblage Size Distribution
Whereas the above concerns the discovery of cuneiform inscriptions without reference to their number, the distribution of assemblage sizes, namely the overall number of inscriptions found at any one archaeological site, adds further nuance to the spatial characteristics of the corpus. Examining such distributions necessitates qualification, however. As previously noted, the data set presented here includes no chronological dimension, and as such reflects an archaeological reality, not a historical record. The number of inscriptions retrieved from any one archaeological site may represent the aggregate of just one or a multitude of specific historical events, and be the product of archaeological excavation or illicit looting conducted over anything from an afternoon to several generations. As a provisional overview, assemblage distributions do, however, bring out certain tangible patterns (consider related visualisations appearing in print publications, e.g., Postgate 1994: Fig. 2:12; Van De Mieroop 1999: 12; Sauvage 2020: 2). To simplify the data, we employ a six-tiered binning of assemblage estimates included in the data set, as summarised in the present table (Table 2). When plotted (Fig. 7), it will be seen that a larger part of assemblages, in excess of sixty-five per cent, concern singular or a handful of inscriptions. Typical such examples include open-air inscriptions and crafted artefacts, but our impression is that quite a significant number of assemblages also concern clay artefacts, namely tablets or sealings, relating to everyday acts of storing and transmitting communication in writing. The remaining thirty-five per cent of the data set exhibit a rather steady decline in the numbers of individual assemblages as the number of inscriptions attributed to each assemblage increases. This would suggest that the spatial prevalence of cuneiform writing is perhaps less appreciated than should be the case, a point that can be further explored with reference to the size of archaeological proveniences (more on which below). Such a bias is certainly hinted at in our review of the coverage and consistency of the CDLI catalogue previously discussed. A total of only sixteen archaeological sites have reported finds of more than 5,000 inscriptions each, including such well-known locales as Ĝirsu (GIR), Ḫattuša (HAT), Ninuwa/Nineveh (NNV), Nippur (NIP), and Umma (JOK) in the range of 30,000-40,000 artefacts each, and Kaneš (KNS), Mari (MAR), Uruk (URU), Pārsa/Persepolis (PRS), Tall ad-Duraihim/Drēhem (DRE), Ur (URI), Bābili/Babylon (BAB), and Ebla (EBA) in the range of 25,000-10,000 each. Tall Abū Ḥabba (SAP) southwest of modern Baġdād/Baghdad is the only site within the current survey to exceed 50,000 reported inscriptions. Even if the cumulative number of inscriptions from such immense assemblages accounts for the majority of finds of cuneiform artefacts, it does obscure a much more diverse distribution of writing in geographical terms. We can test the degree to which the relative prominence of different assemblage sizes as well as their distribution is entirely arbitrary by grouping subsets of records according to modern country (Fig. 8). Preliminary as it may be, this will enable us to sample the data using a variable which is at least partially sensitive to varying intensities of archaeological field research and cataloguing within different national territories (consider Hodder/Orton 1976: 20-29 for related applications). The resulting graph is interesting, as far as the overall trend in assemblage size distribution across four modern countries is concerned. The correlation between Turkey and Iran on the one hand, and Syria and Iraq on the other, would appear an expected outcome if considering the general prominence of singular display inscriptions, typically in stone, in the former areas against the more prevalent finds of cuneiform tablets, which tend to show up in larger numbers, in the latter. More generally, all subsets suggest a relatively consistent decline in the number of assemblages as the number of inscriptions of the individual assemblage goes up.
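The tier counts behind such plots can be reproduced with a few lines of tabular code. The sketch below is illustrative only: the column names, the sample rows, and the tier boundaries are assumptions, since the actual six-tier scheme is the one defined in Table 2.

```python
import pandas as pd

# Hypothetical rows: one record per find location, with an assemblage-size
# estimate and the modern country of the site (all values are placeholders).
df = pd.DataFrame({
    "site": ["A", "B", "C", "D", "E"],
    "country": ["Iraq", "Iraq", "Turkey", "Syria", "Iran"],
    "estimate": [3, 120, 18, 2400, 55000],
})

# Illustrative six-tier binning; the published tier boundaries are given in Table 2.
bins = [0, 10, 100, 1000, 5000, 25000, float("inf")]
labels = ["1-10", "11-100", "101-1,000", "1,001-5,000", "5,001-25,000", ">25,000"]
df["tier"] = pd.cut(df["estimate"], bins=bins, labels=labels)

# Assemblages per tier (cf. Fig. 7) and per tier within each country (cf. Fig. 8).
print(df["tier"].value_counts().sort_index())
print(df.groupby(["country", "tier"], observed=True).size())
```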
Surface Extent of Locations
Another angle provided by the presented data sets focuses on the relationship between writing and settlement size. The close association of writing and urbanism is a firmly established trope of many studies on cuneiform culture throughout the history of the script (see, for example, Van De Mieroop 1997: 215-226; Liverani 2013: 73-80). While such a dialectic may be relevant at a broader theoretical level (consider the enduring power of Childe 1950), it may also obscure a more divergent empirical reality. With regards to the present data sets, we can evaluate the relationship using polygon vector data giving the outline, or surface extent, of archaeological features as drawn from high-resolution satellite imagery (Rattenborg et al. 2021a). These are available for a little more than half of the locations included in the CIGS v. 1.6 index, namely all of those with a certain degree of locational accuracy (Table 3). A number of caveats relating to the overall authority and quality of this data should be noted in advance. The resulting vector data will provide only a rough approximation of site size and is likely to deviate considerably from results using more fine-grained methods of remote sensing and ground-based site survey (see for example Wilkinson et al. 2006; Casana 2020). To the extent possible, optical delineation has been guided by available site reports, but even so may overlook elements not visible to the human eye that would, if included, alter the established surface extent of the site. Furthermore, the defined surface extent will relate to the site as an archaeological feature, not to the extent of any one historical settlement, which is, in many cases, known to have changed considerably from one historical period to another. Without a notion of the general distribution of site sizes that have seen excavation, our ability to assess the regularity of the resulting distribution is severely limited. Juxtaposing the current data set with settlement hierarchies as defined through archaeological survey and research runs up against changing currents of social organisation over time and space, meaning that a more finely attuned evaluation of settlement hierarchies and the production and consumption of writing would require much more extensive and fine-grained data than what is currently available. As in the preceding section, we are reviewing an archaeological, rather than a historical, reality. As archaeological features go, the resulting subset covers a wide variety of site types (Fig. 9). While some only partially exposed features sit at 1,000-2,000 square metres, the smallest, fully delineable sites included in our data set extend over less than 0.5 hectares. Examples of the latter include the predictable remains of isolated monumental structures, such as the monumental gate discovered at Tul-i Āǧuri (JOR) just west of Pārsa/Persepolis (Basello 2017), the aqueduct at Ǧarwāna/Jerwan (JRW) in northern Iraq (Jacobsen/Lloyd 1935: 19-27) and similar features at Nigūb (NGB) (Davey 1985) and Ḫinis/Bāwiān (BVI). But there are also minuscule mounds with finds of inscriptions indicative of everyday practices of accounting or communication, for example Mezraa Teleilat (MZT) (MacGinnis 2018: 224) on the Turkish Euphrates, Tall al-Šiyūḫ Fawqānī (BUR) further downstream in Syria (Fales et al. 2005), and Tall Dāmiyā in the central Jordan Valley (Petit/Kafafi 2016: 24-25), all of which extend over one or two hectares at best. Quite substantial bodies of inscriptions, in the range of a hundred and up to several thousand, are retrieved from tightly packed settlements and modestly sized mounds. The former include, for example, Tall Ḥarmal (SDP) in south-eastern Baġdād (van Koppen 2006-2008: 488) and Ḫirbat al-Dinīya (HRD) on the Middle Euphrates (Joannès 2006), both at around two hectares. The latter comprise, for example, Tall Abū ʾAntīq (ANT), east of al-Naǧaf in southern Iraq, seemingly at a maximum of five hectares (Fahad 2019), as well as Tall Imlīḥiya (MLH) on the lower Diyāla, extending over perhaps 6 ha (Boehmer/Dämmer 1985: 3-5). The largest sites included are massive urban agglomerations, extending over 500-1,000 hectares, counting well-known metropoleis, such as Ninuwa/Nineveh (NNV), Kār-Tukultī-Ninurta (KTN), and Uruk (URU), with assemblages spanning a multitude of discrete finds, archival contexts, and periods. The renown of the latter cases notwithstanding, the overall picture provided by our data, as far as site size goes, suggests a noteworthy prevalence of cuneiform inscriptions also at more modest archaeological locales. In all, around fifty per cent of the c. 300 delineable sites included in this sample constitute features less than ten hectares in extent, thus in the range of hamlets and up to modestly sized towns. Issues of sample reliability, especially as these apply to the accurate definition of surface extent from commercial satellite imagery, should be duly considered here, as should the expected discrepancy between the extent of an archaeological site and that of any given historical settlement that it may represent. On the other hand, the traditional interest of archaeologists as well as looters in prominent and larger archaeological features is an equally relevant factor to include (Pedersén 1998: 240; Matthews 2003: 158). Based on the present subset, finds of inscriptions are certainly not confined to major urban sites. Quite the contrary, cuneiform is found at all types of settlements. The vast majority of the inscriptions available to us remain tied to major archaeological locales, however, as seen if plotting the number of inscriptions according to surface extent. Archaeological features with a surface extent in excess of 150 hectares account for twenty-five per cent of all cuneiform inscriptions. Eighty per cent of all inscriptions are found at sites exceeding fifty hectares in surface extent (reflected also in the distribution of discrete archives and libraries, cf. Pedersén 1998: 238-241). In sheer numbers, only a relatively modest 80,000 inscriptions derive from smaller sites. All of these examples underscore a more general, and relatively obvious, statistical truth, namely that there is no straightforward relationship between the size of an archaeological site and the number of inscriptions retrieved from it. This lack of correlation carries another important implication, however: if a small site can yield many hundreds and thousands of tablets, and a larger site only a few, it follows that the inclusion of even very modest finds of cuneiform inscriptions is critical to the proper evaluation of the overall spatial prevalence of the corpus. The geographical distribution of inscriptions is an indication of prevalence, but the number of inscriptions is not necessarily a proxy of peripheral or core areas of ancient writing culture. As a widely acknowledged example of this reasoning, the discovery of the immense archives of twenty-fourth-century BCE Tall Mardīkh (EBA), ancient Ebla, in the early 1970s is still held out as a potent reminder of how singular archaeological discoveries may force a complete redrawing of established notions of central and peripheral areas of writing cultures (Akkermans/Schwartz 2003: 235).
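A minimal sketch of how such figures can be derived from the polygon data is given below; the file name, the metric CRS, and the column holding per-site inscription counts are assumptions for illustration, not the actual structure of the published vector data.

```python
import geopandas as gpd

# Hypothetical inputs: site outlines as polygons plus a per-site inscription
# estimate (file and column names are assumptions for illustration).
sites = gpd.read_file("cigs_site_outlines.geojson")
sites = sites.to_crs(epsg=32638)              # an example metric CRS (UTM 38N) for area calculation
sites["hectares"] = sites.geometry.area / 10_000

# Share of inscriptions from sites above a given size threshold, e.g. 50 ha.
large = sites["hectares"] > 50
share = sites.loc[large, "n_inscriptions"].sum() / sites["n_inscriptions"].sum()
print(f"Inscriptions from sites larger than 50 ha: {share:.0%}")
```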
Spatial Density and Prevalence
These notions of centrality can be readily appreciated through an examination of the spatial density of the cuneiform corpus.Here, we employ kernel density estimations as a means to visualise general concentrations of cuneiform inscriptions across the Middle East.This allows us to qualify the number of inscriptions found in any one place against the number and concentration of places where inscriptions are discovered.Kernel density estimation is a common means of representing first-order intensity of point distributions in archaeology, and can be rewardingly applied to the present data set (Bevan 2020).
A kernel density estimation of inscription density based on the present data sets employs spatial proximity as the basic variable and the estimated number of inscriptions from a given location as a weight parameter in spatial statistical calculations. This implies that the statistical impact of a location will correspond to the number of other locations proximal to this location and to the scale of the assemblage of inscriptions found at this particular location. The result will rehearse visions of the overall distribution of cuneiform inscriptions familiar to specialist and layperson alike. The map below highlights the vast assemblages of cuneiform inscriptions retrieved from established centres of the cuneiform world, including the capital regions around Boğazkale in central Turkey, the Assyrian capital cities of northern Iraq, the Babylonian heartland between Baġdād and Baṣra, and the Achaemenid imperial seats of Pārsa/Persepolis and Pāṯragadā/Pasargadae. The geographical emphasis sketched here will also be rehearsed if mapping the holdings of most digital catalogues of cuneiform inscriptions currently available. When disregarding the number of cuneiform inscriptions found at any given location, an approach warranted by our previous discussion of the lack of correlation between assemblage size and site surface extent, a rather different picture emerges. Here, we deploy the same set of point locations using the same parameters for spatial calculation, but disregard the weight of the assemblage, meaning that all points will exert the same weight relative to each other. Spatial density will thus be defined from the proximity of locations alone. The resulting distribution still highlights the notional core areas of the cuneiform world, but at a more general level, the emphasis is shifted to the intensity of archaeological field research. Outside of the heartlands of Babylonia and Assyria, the alluvial plain between Baġdād and Baṣra, and the city and hinterland of Mawṣil/Mosul respectively, noticeable concentrations are found in areas that have, for a variety of reasons, seen intensive archaeological research within the last half century or so. Such include, e.g., the salvage excavation programmes preceding the construction of the Mosul Dam in northern Iraq (State Board of Antiquities and Heritage of Iraq 1986; Roaf 1997; Simi/Sconzo 2020) and the Tabqa Dam in central Syria (McClellan 1997), as well as the many excavations conducted in the headwaters of the Ḫābūr in north-eastern Syria in the decades preceding the outbreak of the Syrian Civil War in 2011 (Akkermans/Schwartz 2003: 10-12). Even more interesting is the equally dense cluster found in Israel and the Palestinian Territories, areas that have historically seen markedly higher levels of archaeological research and recording when compared to neighbouring countries (Mazar 1997; Greenberg/Keinan 2007; Lash et al. 2020). It is hardly surprising that a higher frequency of archaeological field research is likely to produce more finds, but the broader implications seem less widely acknowledged. Statistically speaking, we should not be all that surprised if larger assemblages of cuneiform inscriptions were still to be discovered in areas that continue to be considered largely peripheral in general overviews. The palatial archives from Qaṭna/Tall al-Mušrifa (QTN), on the Syrian Orontes, are a case in point (Richter/Lange 2012), as are the scatters of inscriptions from Qadeš/Tall al-Nabī Mandū (QDS) in the northern Biqāʿ Valley (Millard 2010) and Ḥaṣor (HAZ) in northern Israel (Horowitz et al. 2018: 63-88).
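The two density surfaces described here, one weighted by assemblage size and one treating every location equally, can be approximated with a standard weighted kernel density estimator. The sketch below uses SciPy's gaussian_kde; the coordinate and count arrays are random placeholders standing in for the projected CIGS point locations.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Placeholder data: projected x/y coordinates of find locations (shape (2, n))
# and an estimated number of inscriptions at each location.
xy = rng.random((2, 200)) * 1_000
counts = rng.integers(1, 50_000, 200)

kde_weighted = gaussian_kde(xy, weights=counts)   # assemblage size as weight
kde_unweighted = gaussian_kde(xy)                 # every location weighted equally

# Evaluate both surfaces on a regular grid for mapping.
gx, gy = np.meshgrid(np.linspace(0, 1_000, 100), np.linspace(0, 1_000, 100))
grid = np.vstack([gx.ravel(), gy.ravel()])
density_weighted = kde_weighted(grid)      # emphasises the major assemblages
density_unweighted = kde_unweighted(grid)  # emphasises where any inscriptions have been found
```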
Further Perspectives
The present study is based on an initial survey of the secondary literature, intended as a starting point for a more thorough and comprehensive cataloguing of cuneiform inscription metadata.The merit of the data set presented, in consequence, is one of perspective and extent, rather than detail and erudition.Provisional as it may be, the preceding sections have demonstrated the potential of such resources for developing novel perspectives on a corpus of writing long considered beyond measure (e.g., Gelb 1967: 3).Introducing elements of inscription metadata, including chronological placement, material composition, artefact type, language of inscription, and inscription genre to such analyses are certain to produce immeasurably denser overviews of the compositional trajectories of the corpus over time and space (for a concise exploratory example, see Nett et al. in press).
Such approaches may reach well beyond descriptive statistics, for example by exposing tangible preferences in the production and consumption of writing in early human history. The number of individual display inscriptions found within the Urartean cultural sphere seems, on present evidence, of such outsized proportions compared to the corresponding type of finds from the Assyrian heartland as to suggest diverging habits of epigraphical production rather than diverging levels of archaeological fieldwork to be at play. Should we then ask whether the emplacement of writing within the landscape was considered a more prestigious undertaking in the former than in the latter? Or why, next to myriads of recovered impressions of inscribed seals and stamped bricks, the impression of cuneiform inscriptions on pottery vessels is such an exceedingly rare occurrence? Identifying and addressing such broader questions is contingent upon large, comprehensive, and increasingly standardised collections of data being made available for analysis, an effort that this study hopes to contribute to.
Previous surveys of the cuneiform corpus have approached questions of corpus scale and diversity primarily from a philological angle, assembling statistical overviews with reference to known major finds and collections of cuneiform inscriptions, but without explicit or exhaustive attention to geographical distribution.Coupled with a notable bias towards larger assemblages of inscriptions observable in current digital catalogues, such perspectives have overemphasised larger finds of written material at the expense of minor discoveries which do, so we hope to have demonstrated, carry considerable weight in statistically oriented assessments of the wider regional prevalence of the script.As such, accepted notions of centrality-where cuneiform is found and where not-may be more rewardingly considered with reference to multiple parameters.The broader applicability of metadata collections suitable for large-scale computational analysis is their ability to alter and refine the basic interpretational framework of a given set of research questions.
At a more philosophical level, the presented survey touches on a much more contentious issue, namely the question of sample authority.Plenty of anecdotal discussions of the serendipity of discovery, the chance encounter of a specific inscription, and the particular and unique nature of written sources as an object of study will all consider the possibility of a statistically significant historical record from the cuneiform world a fleeting mirage at best.The absence of larger assemblages of cuneiform inscriptions in the highlands of Turkey, Armenia, and Iran, for example, has been attributed to the haphazard nature of archaeological field work (Wilhelm 2008: 105-106).Reviews of the particular histories of deposition of archives and libraries have been leveraged to argue for an entirely accidental corpus being handed down to us (Millard 2005: 306-314).And yet, the sheer scale of the corpus discussed here should give reason to pause.Assessments of the number of Latin epigraphical remains have, very tentatively, suggested survival rates in the range of one to five per cent of inscriptions produced for some regions of the Roman Empire (Heřmánková et al. 2021: 14 with further references).The number of Greek epigraphic inscriptions preserved from Classical Athens seems to represent a considerable part of actual epigraphical production, even if researchers have abstained from giving exact figures (Hedrick 1999: 390-395).Dealing with a relatively narrow selection of materials, artefact types, and textual genres, the scale of such corpora provides some basic illustration of the magnitude with which writing may have been produced and consumed in antiquity.
To the knowledge of the present authors, no systematic attempts have been made at quantitatively assessing the production or consumption of writing in the cuneiform world over any given period of time or geographical area.Despite the easy quantitative juxtaposition of various corpora of writing advanced earlier, comparing numbers across writing cultures is hardly without problems when considering the character and composition of the individual corpora.The relatively equal numerical size of Latin and cuneiform corpora, for example, overlooks the almost inverted nature of their relative composition with respect to material, artefact type, and genre.Close to ninety-five per cent of the cuneiform corpus is made up of unbaked clay artefacts.At least three quarters of preserved Latin inscriptions are rendered on stone.The vast majority of preserved Latin inscriptions were intended for public display, against some five or ten per cent of preserved cuneiform inscriptions.Cuneiform diverges also in the nature of its geographical distribution.Whereas a number of epigraphic corpora are distributed over a relatively high number of individual geographical locations with correspondingly fewer inscribed artefacts found at any one locale, cuneiform inscriptions tend to be found in comparatively larger assemblages distributed over a much smaller number of sites.These are but a few variables reflecting essentially different bodies of historical source material, underscoring the need for larger, data-driven perspectives as a critical prerequisite to fully grasp the processes of formation and deposition that has handed the cuneiform corpus down to us.
Conclusions
Reaching from Rome to the Himalayas, cuneiform inscriptions are encountered across a vast transect of the Eurasian and African land mass.Counting an estimated 540,000 inscriptions and fragments when including unprovenanced artefacts (Streck 2010), this corpus is one of the largest discrete bodies of writing from early human history available to us.Exploiting the intimate archaeological grounding of artefacts inscribed in cuneiform, the present study has provided a first provisional, quantitative overview of the geographical distribution of this immense corpus.In so doing, we have pointed to noticeable gaps in current digital research catalogues, highlighting an archaeological body of inscribed artefacts with a considerably higher degree of spatial prevalence than typically suggested in scholarly literature.Using a variety of basic variables, including finds distribution, assemblage size, surface extent of the associated archaeological feature, and derived analyses focusing on spatial density and prevalence, the preceding pages have suggested some provisional patterning in the geographical distribution of the corpus.The further exploration of broader, long-term trends in the production and consumption of writing through the analysis of metadata catalogues within a spatial frame of reference is liable to open up entirely new facets of the cuneiform world to substantive research.In providing an initial framework for such examinations, including open access data for further augmentation and reuse, the present study hopes to motivate future research within this area.
Figure 1: Point distribution of c. 600 locations of cuneiform finds included in the Cuneiform Inscriptions Geographical Site (CIGS) index version 1.6 (1 July 2023). Map prepared by Rune Rattenborg.
Figure 2: Locational accuracy distribution of archaeological sites with cuneiform finds according to select modern countries (n = 477). Data derived from CIGS version 1.6 (1 July 2023).
Figure 3a-b: Comparison of data coverage in the CIGS-AE index and the Cuneiform Digital Library Initiative (CDLI) catalogue. The first (a) shows the distribution of 597 locations and the spatial density of 429,398 inscriptions. The second (b) shows the distribution of 219 find locations and the density distribution of 260,861 geolocatable inscriptions retrieved from the CDLI catalogue (as of August 2020). Maps prepared by Carolin Johansson.
Figure 6: BM 90849. A diorite door socket fragment with an inscription of Gudea of Lagaš. Found during construction works in Knightrider Street, central London, in 1890. © The Trustees of the British Museum. Shared under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) licence.
Figure 9: Number of sites with cuneiform finds by site size (n = 301). Based on data from CIGS v. 1.6 (1 July 2023). Surface extent derived from polygon vector data prepared by Carolin Johansson.
Figure 10: Distribution of size of sites with cuneiform finds by select modern countries (n = 253). Based on data from CIGS v. 1.6 (1 July 2023). Surface extent derived from polygon vector data prepared by Carolin Johansson.
Figure 12: Kernel density estimation of locations with finds of cuneiform inscriptions. Data derived from CIGS v. 1.6 (1 July 2023), including 597 locations. Map prepared by Carolin Johansson.
Table 1: Locational accuracy of archaeological sites included in CIGS version 1.6 (1 July 2023).
Table 3: Distribution of site sizes based on data from CIGS v. 1.6 (1 July 2023). Surface extent derived from polygon vector data prepared by Carolin Johansson.
"year": 2023,
"sha1": "1d38986e6c369c93788facc6200cf3294a3e428b",
"oa_license": "CCBY",
"oa_url": "https://www.degruyter.com/document/doi/10.1515/aofo-2023-0014/pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8500d270be3c89bef5d93a9a2afbbe2548323d73",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": []
} |
Fetal alcohol spectrum disorder resources for health professionals: a scoping review
Abstract
Objectives: This scoping review aimed to identify and critically appraise resources for health professionals to identify, diagnose, refer, and support individuals with fetal alcohol spectrum disorder (FASD), including the extent to which the resources are appropriate for use in communities with First Nations Peoples.
Method: Seven peer-reviewed databases (April 2022) and 14 grey literature websites (August 2022) were searched. The reference lists of all sources that underwent full-text review were handsearched, and FASD experts were consulted for additional sources. Resources were assessed using the Appraisal of Guidelines for REsearch and Evaluation II instrument and an adapted version of the National Health and Medical Research Council FORM Framework and iCAHE Guideline Quality Checklist.
Results: A total of 41 resources underwent data extraction and critical appraisal, as screening and/or diagnosis guidelines were excluded because they are covered in other reviews. Most were recently published or updated (n=24), developed in the USA (n=15, 36.6%) or Australia (n=12, 29.3%) and assisted with FASD patient referral or support (n=40). Most management guidelines scored 76%-100% on overall quality assessment (n=5/9) and were recommended for use in the Australian context with modifications (n=7/9). Most of the guides (n=15/22) and factsheets (n=7/10) received a 'good' overall score. Few (n=3/41) resources were explicitly designed for or with input from First Nations Australians.
Conclusion: High-quality resources are available to support health professionals providing referrals and support to individuals with FASD, including language guides. Resources should be codesigned with people living with FASD to capture and integrate their knowledge and preferences.
INTRODUCTION
Fetal alcohol spectrum disorder (FASD) is a diagnostic term for a condition that can result from prenatal alcohol exposure (PAE). 1 FASD is characterised by neurodevelopmental impairment associated with a range of psychological, emotional and behavioural difficulties and congenital anomalies. 2 3][6] However, people with FASD display strength and resilience in the face of these challenges, including self-awareness, human connection and receptivity to support. 7The global prevalence of FASD among children and youth in the general population is estimated to be 7.7 per 1000, 8 with similarly high rates observed among children in Western countries like the USA, 9 Canada 10 and the UK. 11The prevalence of FASD among children in some marginalised populations is 10-40 times higher than the global estimate, 12 suggesting that social
STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ First scoping review to identify and appraise publicly available resources to aid health professionals with referral and support for people with fetal alcohol spectrum disorder and to include a focus on resources for health professionals working in First Nations communities.
⇒ The review follows the JBI Manual for Evidence Synthesis framework and uses the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Scoping Reviews checklist to improve the reporting of scoping reviews.
⇒ The Appraisal of Guidelines for REsearch and Evaluation II instrument and a combined, modified version of the International Centre for Allied Health Evidence Guideline Quality Checklist and the National Health and Medical Research Council FORM framework, were used to critically appraise resources.
⇒ Resources unavailable in English or requiring payment for access were excluded. Videos were gathered but did not undergo data extraction or appraisal because they did not align with the study's data extraction and critical appraisal tools.
and economic disadvantage contributes to the risk of FASD as it does with other health outcomes. These children displayed high rates of neurodevelopmental delay, [20][21][22][23][24] behavioural challenges 25 and increased hospital admissions. 26 Health services in remote communities in Australia often lack sufficient health professionals and facilities to address the increased needs of this population. 27 28 In remote communities, young First Nations Australians with FASD may have increased contact with child protection and criminal justice systems, [29][30][31][32] highlighting the importance of early diagnosis and adequate support. 33 Health professionals are well positioned to deliver FASD prevention 34 and facilitate integrated care, referral and support for individuals with FASD and their caregivers. 357][38][39][40] These challenges may be exacerbated in cross-cultural settings, where health professionals and social workers report challenges in discussing FASD and concerns for cultural appropriateness. 41 Relatedly, there are calls for the integration of First Nations and Western wisdom into how health services engage with First Nations communities on FASD 42 and increased First Nations leadership in the codesign of resources and campaigns for these communities. 437][38][39][40] Further, as noted above, health professionals working in cross-cultural settings, including First Nations Australian communities, 41 face increased challenges in engaging with families around FASD. Given the high prevalence 12 17-19 and burden [20][21][22][23][24][25][26][27][28][29][30][31][32][33] of FASD among minority and marginalised populations, equipping health professionals to play an increased and more effective role in FASD prevention and support may have major public health gains. Consequently, in this scoping review, we aimed to identify, analyse and critically appraise publicly accessible FASD resources specifically designed to assist health professionals to identify, diagnose, refer or support people with FASD. We also aimed to evaluate the appropriateness of the resources for health professionals working with First Nations communities. Our working definition of the term 'resources' refers to the successive itemisation of instructions in the form of frameworks, guidelines, guides, factsheets, tools, instruments, applications or models developed for FASD.
METHODS
The scoping review used a previously developed framework 51 and was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Scoping Reviews (PRISMA-ScR), 52 organised into nine stages based on an updated guideline for scoping reviews. 53 Details are outlined in the scoping review protocol. 54 An overview is provided below, including changes to the published protocol.
Stage 1: research question and objectives
The primary research question was: What resources or guidelines are available for health professionals for the diagnosis, assessment, referral (including referral for management) or management of FASD? Secondary research questions were: What is the evidence base, applicability, generalisability and overall credibility of the resource? and What is the key purpose of the resource, including screening, diagnosis, referral for treatment and other learning and psychosocial supports? After the publication of the study protocol, 54 an additional aim was added to explore the extent to which the resources are culturally appropriate for First Nations Australians.
Stage 2: inclusion criteria
Sources were obtained from peer-reviewed and grey literature searches and were defined as primary research studies, systematic reviews, books, policies and websites.Resources were defined as a product or output from a source, including guidelines, guides, factsheets, videos, podcasts, apps and online learning materials.The study protocol presents the inclusion criteria, 54 including the language requirement that resources must be published in English.Consequently, non-English resources, including those written in languages for some minority or marginalised groups, were excluded from this scoping review.This limitation is addressed below.
Stage 3: search strategy
In 2022, peer-reviewed databases (n=7) and grey literature websites (n=14) were searched to identify resources, including the Australian Department of Health (n=9) and national/international FASD organisations' (n=5) websites. Consultations with FASD experts were then undertaken to identify other potential resources. Details are presented in tables 3 and 4 of the study protocol 54 and online supplemental table S1. A concept table was developed for each database and website searched, consistent with the PRISMA-ScR checklist. 52 The Medline electronic search strategy is presented in online supplemental table S2. Data were exported to EndNote reference management software and then Covidence software, where duplicates were removed.
Stages 4-5: screening and selection
Using Covidence, screening and selection were conducted following the PRISMA-ScR statement and checklist in three phases. 53 Titles and abstracts were screened by one coauthor (HN) and repeat screening of 20% was conducted by another (JCO). Sources then underwent full-text screening by one coauthor (HN) to identify potentially relevant resources and repeat screening of 20% by a second (JCO). Title/abstract and full-text screening revealed high inter-rater reliability (95% and 98% agreement, respectively). Resources were also retrieved from handsearching the reference lists of sources identified during phase two of the screening. Any potentially relevant resource was imported for full-text review and retrieval of resources for data extraction.
Several published systematic reviews of FASD screening tools 45 55 and diagnostic guidelines exist, 56 including a registered systematic review of FASD diagnostic guidelines. 57Consequently, all resources that focused only on screening and/or diagnosis were excluded from data extraction and critical appraisal.Health professionals seeking information on FASD screening tools and diagnostic guidelines can access them through current and future publications.
One coauthor (TS) reviewed a selection of videos identified and deemed that their content varied greatly from that included in the study's data extraction and critical appraisal tools so they were excluded.Consequently, data extraction was only completed for guidelines, guides and factsheets focused on referral/management or policy/ broad topics.
Stage 6: data extraction
Four coauthors (JCO, LC, HN and LJR) performed pilot data extraction from randomly selected resources to test the suitability and efficiency of the data extraction template. 58The template was then modified to better suit the research questions and objectives.Following the pilot, two coauthors (TS and HN) conducted data extraction using the modified version of the extraction template (table 1).Then, another coauthor (LC) reviewed 24% of the extracted resources (n=10) to check that the data extraction template had been applied appropriately and consistently, resulting in minor additions to the data extraction and a high level of agreement (96.4%).
Stage 7: quality appraisal
Quality appraisal of resources categorised as 'guidelines' was conducted using the Appraisal of Guidelines for REsearch and Evaluation II (AGREE II) instrument, 59 with an additional 'Applicability-First Nations Australians' domain (four items) to align with the study's additional focus on this population (online supplemental table S3). Weighted scores were calculated for each domain and overall scores were calculated for each resource, including a judgement as to the resource's fit for recommendation. Other resources categorised as 'guides' or 'factsheets' were appraised using a modified appraisal, 50 a combination of the International Centre for Allied Health Evidence (iCAHE) Guideline Quality Checklist 60 and the National Health and Medical Research Council (NHMRC) FORM framework. 61 62 As this scoping review had a health focus, a previously developed tool 50 was modified to align with the NHMRC FORM framework, as shown in the appraisal tool (online supplemental table S4). A pilot was conducted in which two coauthors (JCO and LC) appraised 10 resources and discussed discrepancies with the research team. Each tool component received a grade ranging from A (excellent) to D (poor). Applicability for the Australian context was assessed to establish relevance specifically for Australian health professionals. The item on applicability to patient populations was modified to include a subcomponent on First Nations Australians, in line with the review's additional aim. One coauthor (TS) conducted the quality appraisal of resources. Another coauthor (LC) completed the quality appraisal of 24% (n=10) of all resources that underwent data extraction, including guidelines (n=4) using the AGREE II instrument and guides (n=5) and a factsheet (n=1). The modified appraisal tool was used to ensure consistency and reliability of the process. 50 An inter-rater reliability score of 88.9% was obtained for both quality appraisal tools. The two coauthors then met to discuss and reconcile the scoring differences.
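For reference, AGREE II domain results are usually reported as scaled percentages derived from the summed appraiser ratings. The helper below is a sketch of that standard scaling; the example ratings are invented for illustration and do not correspond to any resource appraised in this review.

```python
def agree_ii_domain_score(item_scores):
    """Scaled AGREE II domain score as a percentage.

    `item_scores` holds one list of 1-7 item ratings per appraiser, e.g.
    [[5, 6, 6, 7], [4, 6, 5, 6]] for two appraisers rating a four-item domain
    (illustrative numbers only).
    """
    n_appraisers = len(item_scores)
    n_items = len(item_scores[0])
    obtained = sum(sum(ratings) for ratings in item_scores)
    minimum = 1 * n_items * n_appraisers
    maximum = 7 * n_items * n_appraisers
    return 100 * (obtained - minimum) / (maximum - minimum)

print(round(agree_ii_domain_score([[5, 6, 6, 7], [4, 6, 5, 6]]), 1))  # 77.1
```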
RESULTS
Stage 8-9: data reporting & evidence summary
Screening and selection of resources
A total of 3542 records were identified via database and grey literature searches, and 583 duplicates were removed before screening (figure 1). 63 The remaining records (n=2959) were screened and assessed for eligibility. A further 90 records were identified via other methods (including reference list screening), from which 35 duplicates were removed, and 17 records were excluded. A total of 101 records met the review's inclusion criteria, including 63 records from the database and grey literature searches and 38 records identified by handsearching reference lists and FASD expert referral.
Overview of resources
The resources (n=101) eligible for inclusion in the review included guidelines (n=18, 17.8%); guides (n=30, 29.7%); factsheets (n=12, 11.9%); screening tools (n=11, 10.9%); diagnosis tools (n=6, 5.9%) and videos (n=24, 23.8%).All resources (except the videos) were categorised based on their focus on identification/screening, diagnosis, referral/management and policy/broad topics, with many assigned to more than one topic.Because systematic reviews of FASD screening tools 45 55 and diagnostic guidelines exist 56 or are underway, 57 resources were later excluded if they focused predominately on screening or diagnosis (n=36; 35.6%) (see online supplemental table S5) or were videos (n=24, 23.8%) (see online supplemental table S6).Consequently, this review included guidelines, guides and factsheets covering primarily referral/management or policy/broad topics.
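As a quick consistency check on the figures reported above, the counts and percentages can be recomputed directly; the snippet below simply re-derives the shares from the stated counts.

```python
counts = {"guidelines": 18, "guides": 30, "factsheets": 12,
          "screening tools": 11, "diagnosis tools": 6, "videos": 24}
total = sum(counts.values())                                   # 101 resources
shares = {k: round(100 * v / total, 1) for k, v in counts.items()}
print(total, shares)  # e.g. guidelines 17.8%, guides 29.7%, videos 23.8%
```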
The resources included outcome measures or recommendations that covered various themes related to FASD referral/management or policy/broad topics.These included behaviour, social, physical and neurological characteristics of FASD; case management; comorbidities; communication, sleep, nutrition, mental health, hearing, and dental assessment and management; education for health professionals, families and other services providers; health service enhancement; management across the lifespan; management evaluation; management plans; medical, non-medical, psychological and educational interventions; models of care; policy and advocacy considerations; recommended language around FASD; referral pathways; risk and protective factors of those living with FASD and support information and services.
Resources for First Nations Australian communities
Only three resources (7.3%), including two guidelines and one guide, were designed specifically: (a) for use with First Nations Australians, (b) for health professionals in First Nations Australian communities or (c) with input from First Nations Australians. 67 68 78 On quality appraisal, the two guidelines scored 58.3% and 41.7% for applicability-First Nations Australians (domain 5b). 67 68 The guide scored 'excellent' for applicability to patient populations-First Nations Australians (item 4b). 78 Although not explicitly designed for use with or input from First Nations Australian populations, all other guides (n=21) and all factsheets (n=10) scored 'excellent' or 'good' for applicability to patient populations-First Nations Australians (item 4b).
DISCUSSION
This scoping review identified, analysed and critically appraised publicly accessible resources for health professionals to refer or support people with FASD.It also evaluated the appropriateness of the resources for health professionals working in First Nations Australian communities.A total of 101 resources were identified, of which 41 underwent data extraction and critical appraisal.All are current and suitable to the specific service settings and roles in which health professionals work with people with FASD and their families.Most were published or updated in 2013 or later, covered various aspects of the health system (face to face with patients, administration, policy) and were based on evidence from research literature or expert judgement and evidence from the research literature.The characteristics of resources included in this review may aid health professionals in identifying the type and content of resources most relevant to their specific settings and patient profiles.Further, the resources identified in this review are more applicable to health professionals in their roles in healthcare settings than the resources identified in previous reviews that focused on FASD resources for use in the general community [44][45][46][47][48] or by education professionals. 50he current review focused on resources for referring or supporting people with FASD and policy/broad topics.Given this focus, health professionals could use these resources alongside resources identified in previous reviews that focused on FASD screening tools 45 55 and diagnostic guidelines 56 or in a review currently underway on FASD diagnostic guidelines. 57Our review and others address critical gaps in the literature concerning a lack of educational FASD resources and standardised tools purposefully designed for health professionals. 1 PAE can affect all aspects of the body and brain, 105 making FASD a heterogeneous disorder that requires contact with a range of health professionals.As such, most resources were aimed toward broadly defined 'health professionals'.Although these resources were widely accessible, the content typically provided only brief overviews of broad topics, such as characteristics and symptoms of FASD, models of care, strategies for supporting people with FASD or language/terminology.A few resources focused on allied health professionals (occupational therapists, speech and language therapists, social workers), mental health professionals (psychiatrists, psychologists, behavioural specialists) and those engaged at the policy-making, health system and programme levels.These targeted resources tended to provide more in-depth, discipline-specific information.For example, there was a psychiatrist's guide for managing the psychiatric and neurodevelopmental disorders of FASD.The limited symptom-specific or clinician-specific resources likely reflect the paucity of research on evidence-based management strategies for FASD, particularly in comparison to other neurodevelopmental disorders, like autism spectrum disorder.Future research is needed to address this evidence gap and improve understanding of FASDspecific treatments and management strategies for common functional impairments in FASD and development of evidence-based, clinician-specific resources by experts in the field would also be valuable.
Only five of the resources reportedly obtained input from people living with FASD, including two guidelines, 65 66 two guides 79 91 and one factsheet. 104 This is a crucial weakness of the current resources that should be addressed by increased efforts to capture and integrate the perspectives of those with lived experience of FASD into resources for health professionals. This approach may help ensure that the development and deployment of resources are better aligned with contexts, needs and preferences of people with FASD and their caregivers and families. 49 People with lived experience of FASD should also be invited to provide input, through consultation or codesign, into policies, programmes and services focused on prevention, screening, diagnosis and management/supports.
This review identified high-quality resources to assist health professionals in engaging with patients and families with FASD. Nearly all resources included information on working face to face with patients, most guidelines were recommended for use in the Australian context with modifications, and all the guides and factsheets scored 'good' or 'excellent' in terms of generalisability and credibility. However, some resources were outdated and scored low on specific criteria, such as rigour of development and clarity of presentation. The literature shows that health professionals have an essential role in FASD prevention 34 and in delivering integrated care and support for individuals with FASD and their families. 35 Consequently, the resources, despite their limitations, may help address an important gap in health professionals' knowledge, skills and confidence to engage with patients and families on FASD-related topics, including their hesitancy to provide screening, diagnosis, support and prevention services. 36 38-40 106 Although research has shown that the distribution of FASD prevention resources was well received by paediatricians in Australia, it had limited impact on their knowledge and practices, highlighting the limitation of educational resources alone to change practice. 37 Moreover, some resources for working face to face with patients included language guides on FASD that may be helpful for health professionals. 79 87 88 91 92 Using appropriate language about FASD is important for creating a respectful, non-judgemental and non-stigmatising environment to discuss PAE 1 and provide integrated care and support services to people with FASD and their families. 79 Most of the language guides on FASD were recently updated, but one published over ten years ago includes out-of-date language. 92 As such, health professionals should only use terminology in current best-practice guides.
Although the scoping review's findings have implications for health professionals globally, they are particularly relevant to enhancing their role in improving FASD-related outcomes in Australia. Nearly one-third of the resources were published in Australia (n=12, 29.3%), including some guidelines (n=2, 22.2%) and guides (n=4, 18.2%), and most of the factsheets (n=6, 60.0%). Moreover, both guidelines published in Australia were developed by reputable organisations 67 68 and all the guides and factsheets published in Australia scored 'excellent' for applicability to Australian patient populations. The resources published in other high-income or middle-income countries, with a similar socioeconomic environment to Australia, may be relevant in this context; however, they may not be directly relevant to countries with different health systems, such as the USA or low-income countries. These findings align with the Australia National FASD Strategic Action Plan 2018-2028 that aims to (1) reduce the prevalence of FASD, (2) reduce the associated impacts of FASD and (3) improve the quality of life for people with FASD, including through increasing access to appropriate diagnostic and support services to improve care and outcomes for people with FASD and provide education and training for health and community service providers to ensure they have the knowledge and confidence to diagnose and support people with FASD. 33 Although the resources align with the Australian context at the national level, they are less suitable for First Nations Australians, a priority population in the Australia National FASD Strategic Action Plan 2018-2028. 33 Given the high prevalence of PAE and FASD in some remote First Nations Australian communities, [17][18][19] and the significant adverse impacts of FASD on child neurodevelopment, [20][21][22][23][24] education attainment 25 and hospital admission rates, 26 support for health professionals in these settings is an imperative. However, only 3 (7.3%) of the 41 resources identified (2 guidelines and 1 guide) were designed for use with First Nations Australians, for health professionals in First Nations Australian communities, or with input from First Nations Australians. 67 68 78 On quality appraisal, these two guidelines received overall scores of 58.3% and 41.7% for applicability to First Nations Australian patient populations and the guide scored 'excellent' for this domain. Despite their limitations, these and other culturally appropriate resources for FASD prevention among First Nations Australians 107 would be useful for health professionals in these settings. However, this study highlights a gap in resources for health professionals supporting First Nations communities.
0][31][32] Additionally, to improve outcomes for First Nations Australians with FASD, it is crucial to acknowledge and address the significant, ongoing impacts of colonisation and intergenerational trauma on their health and well-being, [13][14][15][16] and the barriers to health service access that hinder efforts to address these increased needs. 27 28Further, crosscultural barriers heighten challenges of health service provision for First Nations Australians, with some health professionals and support workers reporting a lack of understanding of culturally appropriate ways to engage about FASD. 41Thus, the Australia National FASD Strategic Action Plan 2018-2028 calls for culturally appropriate FASD prevention and support for First Nations Australian communities, drawing on successful examples across Australia. 33Relatedly, the Australian FASD Indigenous Framework argues for development of more First Nations-grounded, strengths-based, healing-informed approaches-that are based on holistic and integrated support-into existing health services for FASD with First Nations Australians, including how health professionals engage in communities. 42
LIMITATIONS
This scoping review has several limitations that should be considered alongside the findings.First, the study only included resources published in English.This is an important limitation because non-English resources would be crucial for health professionals working in various settings, including with specific minority or marginalised groups.Second, videos were excluded because the data extraction and critical appraisal tools used were developed for written resources and were unsuitable.This does not discount the potential value of video resources for health professionals supporting people with FASD and their families.Third, the two coauthors who extracted data and critically appraise all resources did not compare all their appraisals; however, inter-rater reliability and agreement were high for the random sample of resources assessed by two coauthors.Fourth, authors of resources were not contacted to clarify their development process, including the involvement of stakeholders including people with lived experience of FASD.Some resources may have been developed using this consultation process, even though this information was not provided.Fifth, although the review included input from FASD experts and several coauthors who had indirect lived experience of FASD (EC, ST, EJE, LJR and ALCM), including in remote First Nations Australian communities, input from those with lived experience of FASD was limited.Future studies would benefit from including their perspectives.Sixth, this review excluded resources that predominately focused on FASD screening or diagnostic guidelines because systematic reviews on these resources exist 45 55 56 or are underway. 57This narrow focus is a potential limitation as some health professionals may want easy access to different resources in one location and based on their specific needs and patient populations.We would direct health professionals to access published and ongoing reviews for information on screening and diagnostic tools Open access and this review for resources on primarily referral/management or policy/broad topics.Despite these limitations, this is the first review of its kind, and it provides valuable information to inform health policy and education strategies for health professionals.
CONCLUSIONS
People with FASD have an increased risk of poor educational, social and health outcomes in early and later life. Health professionals are suited to deliver FASD prevention, screening, and diagnosis as well as to provide support to people with FASD and their families. However, they often lack the knowledge, confidence and resources to deliver these services, particularly in cross-cultural settings. This scoping review identified high-quality guidelines, guides and factsheets to support health professionals in providing referrals and support for FASD, including various guides on the appropriate and respectful use of language on FASD. Resources covering FASD screening and diagnosis are published elsewhere, but the resources identified in this study will assist health professionals to provide integrated care for people with FASD and their families. Since the scoping review identified only three FASD resources developed for use in and with input from First Nations Australians, efforts are needed to better assist health professionals to effectively support these communities. People living with FASD should lead the codesign of new resources to ensure their perspectives and preferences are captured and integrated.

Contributors TS conducted the data extractions and critical appraisals for all 41 resources. As first author, he was responsible for writing the final manuscript. LC conducted the grey literature search (n=14 websites), piloted the data extraction template, reviewed data extractions and critical appraisal for 24% of resources and made substantial contributions to the final manuscript and supplementary material. EC conceptualised the study, assisted with the grant application and provided context to the need for resources in First Nations communities. MB conceptualised the study, assisted with the grant application and provided context to the need for resources in First Nations communities. JD conceptualised the study and provided context to the need for resources in First Nations communities. HN conducted all the title and abstract screening, full-text screening and handsearching of the reference lists, piloted the data extraction template and completed the PRISMA flow diagram. JCO conducted the search of 7 databases, 20% screening for title/abstract and full-text review, was involved in piloting the data extraction template and contributed to the first draft of the manuscript. ALCM provided critical revisions to the manuscript. ST conceptualised the study, assisted with the grant application, provided context to the need for resources in First Nations communities and provided critical revisions to the manuscript. EJE assisted with conceptualising the study, obtaining funding, writing the protocol, developing the critical appraisal template, provided expert consultation to identify resources and refined the manuscript. LJR assisted with conceptualising the study, obtaining funding, writing the protocol, developing the critical appraisal template and piloting the data extraction template and provided expert consultation to identify resources. All authors approved the final version. LJR is the guarantor.
Figure 1
Figure 1 PRISMA 2020 flow diagram for resource screening and selection. FASD, fetal alcohol spectrum disorder; PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses.
Table 1
Data extraction table for FASD resources for health professionals; each field is listed with the details recorded.
Reference: Author(s), resource title, publication/updated year and link.
Type: Factsheets, guides, guidelines, videos, websites or screening tools.
Format: Journal article, report or other.
Purpose/aim: Overall purpose/aim of the resource for health professionals.
Country of origin: Resource country of origin.
Health service level: Level of health service that the resource is focused on supporting, including policy, administrative, face to face with patients.
Health professional: Health professionals who are the intended audience or would benefit from using the resource.
Focus: The primary objective of the resource is to support health professionals working with those with FASD with (1) screening, (2) diagnosis, (3) referral/management, (4) referral/management (specifically language guide), (5) policy/broad and (6) prevention information.
Resource outcome measure(s) and/or recommendations: The resource outcome measure(s) and/or recommendations.
Evidence base of the resource: Details regarding the evidence base of the resource, including none reported, expert judgement or literature/clinical research.
Applicability to First Nations Australians
Table 2
Characteristics of resources by type *Some resources covered more than one characteristic.
Table 3
Overall quality scores and domain scores of guidelines using the AGREE II instrument Domain 1: scope and purpose; domain 2: stakeholder involvement; domain 3: rigour of development; domain 4: clarity of presentation; domain 5a: applicability-Australian context; domain 5b: Applicability-First Nations Australians; domain 6: editorial independence.*Scores on domain 5b were not included in overall assessment scores.AGREE II, Appraisal of Guidelines for REsearch and Evaluation II; NACCHO, National Aboriginal Community Controlled Health Organisation; RACP, Royal Australasian College of Physicians; SAMHSA, Substance Abuse and Mental Health Services Administration; SIGN, Scottish Intercollegiate Guidelines Network.
Table 4
Item scores for guides and factsheets based on the modified NHMRC and iCAHE appraisal tool | 2024-07-15T06:17:31.479Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "20f54b251fd7136ee0d42e57a10e6ea72df20cbf",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1136/bmjopen-2024-086999",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c29a99c2d0dd0c4b7f564327861eb62e0755994b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14732085 | pes2o/s2orc | v3-fos-license | Heat Content, Heat Trace, and Isospectrality
We study the heat content function, the heat trace function, and questions of isospectrality for the Laplacian with Dirichlet boundary conditions on a compact manifold with smooth boundary in the context of finite coverings and warped products.
Let Spec(∆_M) denote the spectrum of ∆_M, written as the sequence of eigenvalues λ_1 ≤ λ_2 ≤ ..., where each eigenvalue is repeated according to multiplicity. Two Riemannian manifolds M_1 and M_2 are said to be isospectral if Spec(∆_{M_1}) = Spec(∆_{M_2}). We refer to [11] for further details concerning isospectrality.
Operators of Laplace type.
It is convenient to work in slightly greater generality; this will be important in Section 3 when we discuss warped products. An operator D is said to be of Laplace type if the leading symbol of D is given by the metric tensor or, equivalently, if we may express the operator D in any system of local coordinates x = (x_1, ..., x_m) in the form D = -(g^{ij} ∂_{x_i} ∂_{x_j} + a^i ∂_{x_i} + b), where we adopt the Einstein convention and sum over repeated indices; here a^i and b are smooth functions and g^{ij} is the inverse of the metric g_{ij} := g(∂_{x_i}, ∂_{x_j}).

1.3. The heat equation. Let φ ∈ C^∞(M) define the initial temperature of the manifold and let D be an operator of Laplace type on C^∞(M). The subsequent temperature distribution u := e^{-tD} φ for t > 0 is defined by the relations (∂_t + D)u = 0, lim_{t↓0} u(·; t) = φ, and u|_{∂M} = 0 (Dirichlet boundary conditions); we refer to [8] for a further discussion of the heat process. The special case that D = ∆_M is of particular interest. Let {λ_n, φ_n} be a spectral resolution of ∆_M and let σ_n(φ) := (φ, φ_n)_{L²(M)} be the Fourier coefficients. We may then express e^{-t∆_M} φ = Σ_n σ_n(φ) e^{-t λ_n} φ_n.

1.4. The heat content. Let ρ be the specific heat and let φ be the initial temperature of the manifold. The total heat energy content is then defined to be β(φ, ρ, D)(t) := ∫_M (e^{-tD} φ) ρ dvol. The heat content is expressible for the Laplacian in terms of the Fourier coefficients: β(φ, ρ, ∆_M)(t) = Σ_n σ_n(φ) σ_n(ρ) e^{-t λ_n}. We shall assume ρ and φ are smooth henceforth. We refer to [2] for some results in the non-smooth setting where φ is allowed to blow up near the boundary and to [5] where the boundary is polygonal. The total heat energy β_M(t) of M is defined by taking φ(x) = ρ(x) = 1. The total heat energy content of the manifold is a scalar function which is an isometry invariant of the manifold. For example, if M = ([0, π], dx²) is the interval with the standard metric, then β_M(t) = (8/π) Σ_{k odd} k^{-2} e^{-k² t}.

1.5. The heat trace. Let D be an operator of Laplace type on C^∞(M). The operator e^{-tD} is an infinitely smoothing operator. If f ∈ C^∞(M) is an auxiliary function which is used for localization or smoothing, then f e^{-tD} is of trace class and Tr_{L²}(f e^{-tD}) is well defined. We shall assume that f is smooth and refer to [3] for some results in the non-smooth setting where f is allowed to blow up near the boundary. We also refer to [16] for results concerning Riemann surfaces with corners.
If we take f = 1 and let D = ∆_M, then Tr_{L²}(e^{-t∆_M}) = Σ_n e^{-t λ_n} is a spectral invariant which determines Spec(∆_M).
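To make these two functions concrete, the following short Python sketch (an illustration added here, not part of the original article) evaluates truncated versions of the heat trace and the heat content for the interval M = ([0, π], dx²) with Dirichlet boundary conditions, using the eigenvalues λ_k = k² and the Fourier coefficients of the constant function 1; the truncation level and the sample times are arbitrary choices.

```python
# Truncated heat trace and heat content for the Dirichlet Laplacian on [0, pi].
# Eigenvalues are k^2 with eigenfunctions sqrt(2/pi)*sin(kx); the Fourier
# coefficients of the constant function 1 vanish for even k and equal
# 2*sqrt(2/pi)/k for odd k, so their squares are 8/(pi*k^2).
import math

def heat_trace(t, n_terms=2000):
    # Tr(e^{-t Delta}) = sum over k of exp(-k^2 t)
    return sum(math.exp(-k * k * t) for k in range(1, n_terms + 1))

def heat_content(t, n_terms=2000):
    # beta_M(t) = sum over odd k of (8 / (pi k^2)) exp(-k^2 t)
    return sum(8.0 / (math.pi * k * k) * math.exp(-k * k * t)
               for k in range(1, n_terms + 1, 2))

for t in (1.0, 0.1, 0.01):
    print(f"t = {t:5.2f}   trace = {heat_trace(t):10.4f}   content = {heat_content(t):7.4f}")

# As t -> 0 the heat content tends to pi = vol([0, pi]), consistent with the
# leading term of its expansion, while the heat trace diverges.
```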
1.6. Local invariants. We can extract locally computable invariants from the heat content and from the heat trace as follows. Let D be an operator of Laplace type on C^∞(M) and let f, ρ, φ ∈ C^∞(M). Work of Greiner [12] and of Seeley [17,18] can be used to show that there is a complete asymptotic expansion as t ↓ 0 of the form Tr_{L²}(f e^{-tD}) ∼ Σ_{n≥0} a_n(f, D) t^{(n-m)/2}. Similarly, see the discussion in [2,4], there is a complete asymptotic expansion β(φ, ρ, D)(t) ∼ Σ_{n≥0} β_n(φ, ρ, D) t^{n/2}. These invariants are locally computable and have been studied by many authors; we refer to [10] for a more complete discussion of the history of the subject. To simplify the discussion, we shall only consider the special case where D = ∆_M and where f = ρ = φ = 1. We define the following local isometry invariants of the manifold: a_n(M) := a_n(1, ∆_M) and β_n(M) := β_n(1, 1, ∆_M). Let indices i, j, k, l range from 1 to m and index a local orthonormal frame {e_1, ..., e_m} for the tangent bundle of M. Let R_{ijkl} be the components of the Riemann curvature tensor; our sign convention is chosen so that R_{1221} = +1 on the sphere of radius 1 in R³. Let ρ_{ij} := R_{ikkj} be the Ricci tensor and let τ := ρ_{ii} be the scalar curvature. Near the boundary we normalize the choice of the local frame by requiring that e_m is the inward unit geodesic normal. We let indices a, b, c, d range from 1 through m − 1 and index the restricted orthonormal frame {e_1, ..., e_{m−1}} for the tangent bundle of the boundary. Let L_{ab} := g(∇_{e_a} e_b, e_m) be the components of the second fundamental form. We can use the Levi-Civita connection on M to multiply covariantly differentiate a tensor defined in the interior; we let ';' denote the components of such a tensor. Similarly, we can use the Levi-Civita connection of ∂M := (∂M, g|_{∂M}) to multiply covariantly differentiate a tensor defined on the boundary; we let ':' denote the components of such a tensor. The difference between ';' and ':' is measured by the second fundamental form.
Although formulas for a_5(M) and β_5(M) are known, we have omitted them in the interests of brevity. Formulas generalizing those in Theorems 1.1 and 1.2 are available in the more general setting to study the invariants a_n(f, D) and β_n(φ, ρ, D) for an arbitrary vector valued operator D of Laplace type; again, we shall omit details in the interests of brevity and instead refer to the discussion in [10]. Although we have chosen to work with Dirichlet boundary conditions, similar formulas exist for Neumann, transfer, transmittal, and spectral boundary conditions. The history of this subject is a vast one and beyond the scope of this brief article to give in any depth. We refer to [13] for a more detailed discussion of elliptic boundary conditions.

1.7. Relating the heat trace and heat content. McDonald and Meyers [15] have constructed additional invariants involving exit time moments which determine both the heat trace and the heat content; we also refer to related work [14] by these authors in the context of graphs.
It is difficult in general, however, to relate the heat trace and the heat content directly. In particular, there is no obvious relation between the formulas given in Theorems 1.1 and 1.2 when n ≥ 3. It is clear that Tr_{L²}(e^{-t∆_M}) is determined by Spec(∆_M) and it is clear that β_M(t) is determined by the full spectral resolution S(∆_M). It is not known, however, if the full heat content function β_M(t), or in particular the heat content asymptotic coefficients β_k(M), might be determined by Spec(∆_M) alone. More specifically, one does not know if there are Dirichlet isospectral manifolds with different heat content functions. In the remainder of this brief note, we shall present some results which relate to this question. In Section 2 we discuss finite coverings and in Section 3 we discuss warped products.
2.2. Heat trace and heat content asymptotics. The invariants a_n(M) and β_n(M) are locally computable. Since integration is multiplicative under finite coverings, the following result is immediate: Theorem 2.1. Let M_1 → M_2 be a finite k-sheeted Riemannian cover. Then a_n(M_1) = k a_n(M_2) and β_n(M_1) = k β_n(M_2) for all n.
2.3. Heat trace. We begin by presenting an example to show that there are examples where Tr_{L²}(e^{-t∆_{M_1}}) ≠ k Tr_{L²}(e^{-t∆_{M_2}}) despite the fact that the heat content function is multiplicative under finite coverings. Let
Although this example is in the category of closed manifolds, we can construct other examples as follows. Let N = ([0, π], dθ 2 ) be a manifold with boundary. Let M i := N × M i and let π act only on the second factor. Since one has:
2.4. Heat content asymptotics. It is perhaps somewhat surprising that, in contrast to the situation with the heat trace discussed in Section 2.3, one has: Theorem 2.2. Let M_1 → M_2 be a finite k-sheeted Riemannian cover. Then β_{M_1}(t) = k β_{M_2}(t) for all t > 0. Proof. Let {λ_n, φ_n} be a spectral resolution of ∆_{M_2}. Let c_n = σ_n(1) be the associated Fourier coefficients. We use Equation (2.a) to see that 1 = Σ_n c_n φ_n in L²(M_2) implies 1 = Σ_n c_n π*φ_n in L²(M_1).
Since ∆_{M_1} π*φ_n = π*(∆_{M_2} φ_n) = λ_n π*φ_n and since the functions π*φ_n satisfy Dirichlet boundary conditions, we have β_{M_1}(t) = k β_{M_2}(t).

2.5. Summary. Theorems 2.1 and 2.2 show that a Sunada construction involving finite coverings will not produce isospectral manifolds with different heat content functions as only the order of the cover is detected. If M is a Riemannian manifold which has constant sectional curvature +1, then M is said to be a spherical space form. If M is closed and if the fundamental group π_1(M) is cyclic, then M is said to be a lens space. Ikeda [6,7] and other authors have studied questions of isospectrality for spherical space forms; we refer to [9,10] for further details as the literature is an extensive one. These examples can easily be modified to the category of manifolds with boundary by punching out a small disk from M_2 and then lifting to get a spherical space form with boundary. Since there are spherical space forms with the same fundamental group which are not isospectral, neither the heat trace asymptotics nor the full heat content function determine either the spectrum of the manifold or the isometry type of the manifold.

The normalizing constant 2/m is chosen so that one has the following relationship between the volume elements: We define an auxiliary operator of Laplace type on C^∞(N) by setting: Note that this operator is no longer self-adjoint if f is non-constant; this operator does, however, have the same spectrum as ∆_N since it is conjugate to this operator. We may then use Equation (1.a) to see that

3.2. The heat content. Let β(φ, ρ, D)(t) be the generalized heat content function defined in Equation (1.c).
Theorem 3.1 shows that the heat content does not even determine the dimension of the underlying manifold as only the volume of the manifold M appears in this formula. On the other hand, Equation (1.e) shows that the dimension of the underlying manifold is determined by the heat trace. Consequently, we once again see that the heat content function does not determine the underlying spectrum.
3.3. Isospectrality. We conclude our discussion by showing that isospectrality is preserved by the warped product construction.
Proof. Let M be a Riemannian manifold. Let {Φ_i, µ_i} be a spectral resolution of ∆_M. We decompose Let µ_i e^{-(2/m) f} act by scalar multiplication. We use Equation (3.b) to see that the decomposition of Equation (3.c) induces a corresponding decomposition We may take N = [0, π] and assume that f(0) = f(π) = 0. We then have that ∂(N × M) is isometric to the disjoint union of two copies of M. Since there are many pairs of isospectral closed manifolds which are not isometric, Theorem 3.2 provides examples of isospectral manifolds with boundary given by warped products which are not isometric.
3.4. Conclusion. Theorems 1.1 and 1.2 show that the volume of the interior, the volume of the boundary, and the dimension of M are determined by the heat trace. Thus Theorem 3.1 shows that a warped product construction involving isospectral manifolds with a suitably chosen manifold with boundary will not produce isospectral manifolds with different heat content functions. Theorem 3.1 does show, however, that there exist manifolds with the same heat content function which are not isospectral. If we take f = 1 and apply the argument of Theorem 3.1, we see that the heat content function does not determine the dimension of the manifold.
There exist spherical space forms M_1 and M_2 which are isospectral but not diffeomorphic. If we take N = ([a, b], dx²) with 0 < a < b and if we take as a warping function f(x) = x², then the resulting warped products P_i := N ×_f M_i are flat isospectral manifolds whose boundaries are not diffeomorphic. | 2008-02-20T21:18:44.000Z | 2008-02-20T00:00:00.000 | {
"year": 2008,
"sha1": "789b797ccfbcd2e0c0a413ffc0e009ed55528d1c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0802.2948v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "789b797ccfbcd2e0c0a413ffc0e009ed55528d1c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
262045090 | pes2o/s2orc | v3-fos-license | Reproductive performance effects of rearing the quasi-social parasitoid, Sclerodermus brevicornis (Hymenoptera: Bethylidae), on a factitious host
Abstract Wasps in the genus Sclerodermus are ectoparasitoids that typically attack the larvae of woodboring coleopterans. Interest in these species is increasing as they are used in programs to control longhorn beetle pests of economic importance in China and have invasive pest control potential in Europe. Wasps may be mass reared for field release, but using the target host species can be time consuming and physically demanding. There is thus a need for factitious hosts with lower production costs and that are easier to rear. The present research focuses on Sclerodermus brevicornis, which was found in Italy in association with the invasive longhorn beetle, Psacothea hilaris hilaris, and can be laboratory reared on this longhorn beetle and on a factitious lepidopteran host, Corcyra cephalonica. As it is known that the biology of natural enemies can be influenced by the host they emerge from and that the behavior of S. brevicornis is relatively complex due to its degree of sociality (multiple foundress females cooperate to paralyze the host and produce offspring communally), we explored whether, and how, performance and behavioral traits of adult females are influenced by the host species on which they were reared, both when no choice or a choice of current host species was offered. We evaluated the survival of foundresses and their movements between offered hosts and their tendency to form groups with other foundresses according to kinship and host characteristics. We also evaluated the production of offspring and the timing of their development. We found that S. brevicornis reared from C. cephalonica do have some disadvantages compared with those that have developed on P. h. hilaris but also that they recognize, prefer, and can reproduce on P. h. hilaris. We conclude that the use of the more convenient factitious host for mass-rearing is unlikely to greatly compromise the potential of S. brevicornis to suppress longhorn beetle pests in the field.
Introduction
Parasitoids in the genus Sclerodermus Latreille (Hymenoptera Bethylidae) are of interest to biological pest control practitioners due to their ability to attack longhorn beetle larvae after finding them within infested tree trunks and branches (Chen and Cheng 2000, Yang 2004, Kaishu 2006, Tang et al. 2012, Yang et al. 2014, Jiang et al. 2015).These parasitoids use volatiles to locate suitable habitats and, consequently, their hosts (Yang et al. 2005, Wang et al. 2011, Men et al. 2019).Species in the genus also exhibit quasi-sociality, in which several adult females may cooperate in the attack of large hosts, overcoming its defenses and gaining substantial resources for offspring development (Tang et al. 2014, Wei et al. 2014, 2017, Lupi et al. 2017).Furthermore, in the subsequent communal production and care of large broods of offspring (e.g., Tang et al. 2014, Abdi et al. 2020a, 2020b, Malabusini et al. 2022), foundresses maintain brood hygiene and assist the larvae during their development and the spinning of cocoons prior to pupation (Wheeler 1928, Hu et al. 2012, Yang et al. 2012).In S. brevicornis, the duration of brood development is typically less than 1 month, depending on temperature, host species, and the number of contributing foundresses (Lupi et al. 2017, Abdi et al. 2021).The sex ratios of Sclerodermus broods are strongly female biased (Abdi et al. 2020a, 2020b, 2021, Malabusini et al. 2022, Lehtonen et al. 2023).Males are the first offspring to mature (protandry), and they fertilize newly maturing females while they are still within their cocoons (Hu et al. 2012).
In Europe, current attention is centered on Sclerodermus brevicornis Kieffer, which was found in Italy in association with the invasive Asian longhorn beetle, P. hilaris hilaris (Pascoe, 1858) (Lupi et al. 2014), and was subsequently successfully reared on this host in the laboratory (Lupi et al. 2017).
However, rearing S. brevicornis on P. h.hilaris is labor intensive and expensive, and the use of factitious hosts that are easier to rear can assist the mass rearing of the parasitoids.Sclerodermus species naturally attack coleopteran larvae and prior work has shown that some coleopterans may be used as factitious hosts.For instance, the mealworm, Tenebrio molitor L., can be used for S. guani and S. sichuanensis (Kai et al. 2006, Zhuo et al. 2016, Guo et al. 2019).Tenebrio molitor is, however, not suitable for S. brevicornis development (D.L. personal observations).As some other bethylid wasp species naturally attack lepidopterans (Mayhew and Hardy 1998), speculative trials using larvae of the rice moth, Corcyra cephalonica Stainton (Lepidoptera: Pyralidae), were carried out and showed that this species could serve as a factitious host for S. brevicornis production (Abdi et al. 2021).A parasitism rate of around 75% was attained using C. cephalonica, which is similar to that achieved by S. brevicornis when provided with its beetle hosts (Lupi et al. 2017, Abdi et al. 2021).Further aspects related to the capacity of S. brevicornis reared on C. cephalonica to reproduce and survive low temperature storage are reported by Jucker et al. (2020).
While rearing parasitoids on factitious hosts can be advantageous in terms of short-term savings of space, time, and costs, there may be longer-term negative effects.Changes in parasitoid performance can arise immediately or after several generations of breeding on a given host (van Lenteren 2003, Riddick 2009).For instance, development on different host species may affect the size of developed adults and size may in turn influence subsequent host finding ability, longevity, and fecundity (Hardy et al. 1992, Visser 1994, Harvey 2000, 2005, Luck and Forster 2003, Karsai et al. 2006).Furthermore, parasitoids may use chemical cues associated with the host they developed from to inform their future foraging behaviors (Pomari-Fernandes et al. 2015, Bertin et al. 2017).Finding and recognizing hosts in the field can be a complex challenge for female parasitoids (Fellowes et al. 2023, Quicray et al. 2023) and may be more difficult for parasitoids that utilize cues associated with hosts employed in artificial rearing systems (Gandolfi et al. 2003).
In the current study, we assess whether the behavior and performance of adult female S. brevicornis, when presented with hosts of the target species (no-choice tests) or with two different host species (choice tests), are influenced by the species of host on which they have developed (host of origin).As S. brevicornis is quasi-social, we study groups of foundress females as well as the behavior of individuals within groups.As recent studies have found that kinship between Sclerodermus females influences host attack and reproductive behavior when a single host is presented (Abdi et al. 2020a, 2022b, Guo et al. 2022, 2023), we vary the foundress composition of groups to assess whether the host from which females emerge influences subsequent performance and the distribution of foundresses across hosts when a choice of hosts is available.
Host Rearing
The naturally adopted (invasive exotic) host, Psacothea hilaris hilaris (Pascoe) (Coleoptera: Cerambycidae) (Asian longhorn beetle), and the factitious host, Corcyra cephalonica Stainton (Lepidoptera: Pyralidae) (rice moth), were used in the present work to assess the biology and behavior of the parasitoid.Both hosts have been shown to be suitable for S. brevicornis development under laboratory conditions (Abdi et al. 2020a, 2021, Jucker et al. 2020).
A colony of the xylophagous beetle, P. h.hilaris, was reared on an artificial diet since 2013, as described in Lupi et al. (2015), in climate chambers at 26 ± 1 °C, a 16L:8D photoperiod, and a relative humidity of 60 ± 5%.The P. h.hilaris larvae used in the experiments reported here had a mean weight of 0.25 ± 0.0044 g (digital precision balance TE64, Sartorius AG, Germany).
The moth C. cephalonica was reared on an artificial diet for more than 30 generations prior to the current study (Limonta et al. 2009, Abdi et al. 2020b).Adults were kept in a plexiglass cage (36 × 26 × 25 cm), and, in order to obtain eggs, the females were collected and placed in a small glass container where they oviposited.After 2 days, eggs were collected from the bottom of this container using a brush and placed in a Petri dish (15 cm diameter, 2 cm height) filled to a depth of 1 cm with the artificial diet to feed the larvae after hatching.Corcyra cephalonica larvae used in the current experiment had a mean weight of 0.029 ± 0.0051 g.
Parasitoid Rearing
The rearing system of S. brevicornis has been maintained in the laboratory since 2011, following the protocols detailed in Lupi et al. (2015, 2017) and Favaro et al. (2017). Two separate rearing systems were set up: one using the "natural" host, P. h. hilaris, and the other using the factitious host, C. cephalonica, each for more than 30 parasitoid generations. Colonies were maintained in a climate chamber at 25 ± 1 °C, 16L:8D photoperiod, and 60 ± 5% RH. Adult females were collected shortly after emergence and stored, in groups in vials, in a refrigerator at 4 ± 1 °C for around 15 days (Jucker et al. 2020) until used in the experiment.
Single Host: No-Choice Test
We evaluated host-of-origin effects when mature foundresses were not offered a choice of hosts.Each replicate (N = 40) consisted of one P. h.hilaris larva placed into a glass vial (8 cm height, 5 cm diameter, closed with cotton wool and a gauze) (Fig. 1).In half of the replicates, 2 S. brevicornis females that had developed on the same individual P. h.hilaris host were introduced into the vial.In the remaining replicates, 2 females that had developed on the same individual C. cephalonica host were introduced.Replicates were maintained inside a climate chamber (26 ± 5 °C, 16L:8D photoperiod, and 60 ± 5% RH) and were checked once per day, until the death of both foundresses (of no offspring were produced) or offspring emergence, for up to 50 days, under a stereo dissection microscope (MZ 12.5, Leica Microsystems GmbH, Wetzlar, Germany, and Wild Heerbrugg M5A, Leica Geosystems GmbH, Heerbrugg, Switzerland).The following parameters were monitored and recorded: foundress mortality (females were considered to be dead when no movement was detected when stimulated), offspring production (numbers and sexes of emerged adult offspring), and timing (days to host paralysis, days to oviposition, overall days taken for offspring to develop to adulthood).
Two Hosts: Choice Test
We evaluated host-of-origin effects when foundresses were offered a choice of hosts, with the hosts being of different species.Each replicate (N = 42) used a 3-sector Petri dish (height: 1.5 cm, diameter: 9.0 cm) in which one P. h.hilaris larva and one C. cephalonica larva were placed in separate sectors (Fig. 2).To prevent their movement from the sector, C. cephalonica larvae were pre-paralyzed by a female Goniozus legneri Gordh (Hymenoptera: Bethylidae) (maintained in the laboratory since 2016), which was removed once it had stung the host (following Abdi et al. 2021).Four S. brevicornis were added into the third sector of each Petri dish, with foundress group composition varied to be either four females that had developed on the same P. h.hilaris host ("4Psaco",14 replicates), 4 females that had developed on the same C. cephalonica ("4Corcy", 14 replicates), or 2 females that had developed on a P. h.hilaris plus 2 females that had developed on a C. cephalonica ("2P+2C", 14 replicates) (Fig. 2).Replicates were maintained in a climate chamber at 26 ± 1 °C, 16L:8D photoperiod, and 60 ± 5% RH.
The "barriers" (low walls) between the sectors within each Petri dishes prevented the movement of host larvae between sectors, but parasitoids could pass over them with ease and were consequently free to move within the entire Petri dish and to have contact with either or both of the hosts.To track foundress movement and position, individual females were marked with a dot of nontoxic colored paint (Posca Marking Pen, Japan, tip diameter 0.9 cm) on the middle of the dorsal surface of the pronotum."Foundress movement" was defined as when a given female was observed in association with a different host than in the previous observation.
Each replicate was observed three times per day (at 10 am, 1 pm, and 4 pm) until offspring pupation, or for up to 50 days, under a stereo dissection microscope.Observations were ceased when all adults emerged or when both hosts in a replicate dried up and no parasitoid offspring survived (from eggs that had been laid on at least one of the hosts); replicates that did not reach a given stage were considered as censors in analyses of timing.
If one host larvae within a replicate died (became desiccated) within the first week of monitoring, it was replaced by a fresh host of the same species and of similar weight.Similarly, when a S. brevicornis foundress died during the first week of the trial, it was replaced by a female from the same brood.If at least one host larva died after the first week of monitoring and if S. brevicornis females had not laid eggs on either host, the replicate was excluded.If more than one S. brevicornis female died between the end of the first week of monitoring and the hatching of larval offspring, the replicate was excluded.Replicates containing 2 pairs of sibling S. brevicornis were excluded if one or more of the females died after the first week.These adjustments were made to retain focus on the behavior of "full" groups of females throughout the observation period, rather than to document only the consequences of initial foundress group composition.To obtain the sample sizes given above, excluded replicates were recreated and monitored following the same methodology.
At each observation time, the following information on each S. brevicornis female was recorded: parasitoid position within each sector (on the P. h.hilaris larva, on the C. cephalonica larva, or "around" [i.e., on neither larvae and thus elsewhere within the Petri dish]); the death of any foundresses; the presence and number of parasitoid eggs, larvae, or pupae on each host; and the numbers and sexes of any mature adults.Brood sex ratio was defined as the number of adult males divided by the total brood size.
Statistical Analysis
We employed generalized linear models (GLMs) and generalized linear mixed models (GLMMs) to explore the effects of experimental treatments on parasitoid performance. GLMs were used for analyses of a single response per replicate (Aitkin et al. 1989), and GLMMs were used when analyses concerned multiple observations per replicate (Bolker et al. 2009). Log-linear analysis, with a log link function, was used for analyses of integer response variables (Aitkin et al. 1989, Crawley 1993), and logistic analyses, with a logit link function, were used for most analyses of proportional response variables (Crawley 1993, Wilson and Hardy 2002). In log-linear analysis and logistic analyses of grouped binary data, quasi-Poisson and quasi-binomial distributions of residuals were adopted, using empirically estimated scale parameters, to take potential over- or underdispersion into account (Crawley 1993, Wilson and Hardy 2002, Hardy and Smith 2023).
For data on the proportion of observations of foundresses on each host, angular transformation was used prior to Gaussian parametric analysis, with an identity link function, followed by post hoc Tukey's tests with a Type I error rate of <0.05. Nonparametric analysis using a contingency table was additionally employed to explore the positional association of foundresses with host species according to the host they developed on (we regard this analysis as illustrative rather than formal, see Table 2). Parametric survival analyses were used to identify factors affecting the timing of reproductive events: these were Weibull models with a time-dependent hazard function, with replicates that failed to attain a given developmental stage treated as censors (Aitkin et al. 1989, Crawley 1993, Zhang 2016, Malabusini et al. 2022). All statistical tests were 2-sided. Data were analyzed using the statistical software R (version 4.2.0), except for data in contingency tables that were analyzed using a χ² test in Prism GraphPad.
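The analyses above were carried out in R; the snippet below is a rough Python/statsmodels sketch of the same kind of quasi-binomial logistic GLM, applied to per-foundress first-week mortality, and is offered only to make the model structure concrete. The data values and variable names are invented placeholders rather than the study's data, and the random replicate effect used in the GLMMs is omitted here.

```python
# Sketch of a logistic GLM with a dispersion (quasi-binomial-style) correction.
# Outcome: 1 = foundress died in the first week; predictor: group composition.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "died":  [0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0],           # placeholder data
    "group": ["4Psaco"] * 4 + ["4Corcy"] * 4 + ["2P+2C"] * 4,
})

model = smf.glm("died ~ C(group)", data=df, family=sm.families.Binomial())
result = model.fit(scale="X2")  # Pearson chi-square scale approximates quasi-binomial
print(result.summary())
```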
Foundress mortality
Of the total of 80 foundresses, 28 died during the observation period, and in all cases, dead females had been bitten into 2 parts, although it could not be distinguished with certainty whether it was the P. h.hilaris larvae host or another foundress that was responsible.Foundress mortality occurred in 16/40 replicates.The probability of an individual foundress dying was not influenced by the host on which it was reared (F 1,82 = 0.86, P = 0.35).
Timing of events
After presentation of the hosts, the mean time to host paralysis was 2.35 (SE ±0.36) days, the mean ovipositional time was 12.70 (+0.46, −0.44) days, and the total time to development (to the first emergence of an adult) was 48.97 (+0.746, −0.735) days. The overall time was calculated considering only those replicates in which S. brevicornis reached the egg stage, and replicates that did not reach the adult stage were treated as censors. The host species on which foundresses had been previously reared did not influence the timing of paralysis, oviposition, or offspring production (cohort survival analyses, with hosts that did not become paralyzed treated as censors; time to paralysis: χ²₁ = 0.78, P = 0.38; oviposition: χ²₁ = 0.73, P = 0.39; development: χ²₁ = 0.23, P = 0.63).
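For censored timing data of this kind, a Weibull survival model of the sort described in the Statistical Analysis section can be sketched as follows. This uses the Python lifelines package rather than the authors' R workflow, and the durations and censoring indicators shown are illustrative placeholders only.

```python
# Illustrative parametric (Weibull) survival fit for time to oviposition.
# event = 1 means oviposition was observed; event = 0 means the replicate was
# censored (monitoring ended without oviposition).
import numpy as np
from lifelines import WeibullFitter

durations = np.array([8, 10, 11, 12, 12, 13, 14, 16, 30, 50, 50])  # days
observed = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0])

wf = WeibullFitter()
wf.fit(durations, event_observed=observed)
wf.print_summary()                     # Weibull shape (rho_) and scale (lambda_)
print(wf.median_survival_time_)
```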
Choice Test
Analyses were performed on data from 40 replicates, as 2 replicates in the treatment "4 Corcy" were excluded due to the death of females between the end of the first week of monitoring and the hatching of larval offspring (see Methods).
Foundress mortality
Some foundresses died during the observation period, and in all cases, dead females had been bitten into 2 parts, although it could not be distinguished with certainty whether it was the P. h.hilaris larvae host or another foundress that was responsible (as C. cephalonica larvae were pre-paralyzed they were not responsible for foundress death).However, all such deaths occurred before the P. h.hilaris hosts were paralyzed, suggesting that deaths were caused by defensive actions of the host.Within the first week of observations, 21 bisected foundresses (13.81% of the total) were counted.In 39.47% (15/40) of replicates, at least 1 foundress was killed during the first week; in 23.68%, 1 foundress was killed, and in 15.79%, 2 foundresses were killed.The probability of an individual foundress dying was influenced by the group composition: mortality was most common (25.00%) when foundresses originated from 2 different rearing systems, and least common (3.85%) when the 4 foundresses were siblings originating from the P. h.hilaris rearing system (Logistic GLMM with replicate identity fitted as a random factor: χ 2 2 = 7.63, P = 0.02, Fig. 3).
Egg production
Oviposition always occurred on at least one host in each replicate: in 42.5% (17/40) of replicates, eggs were laid on just one host and, among these, oviposition was more common on the P. h. hilaris host, although not significantly so (N = 11, 64.71%; χ² test of goodness of fit: χ²₁ = 1.47, P = 0.23). Eggs were laid on both hosts in 57.50% of replicates, and the probability of both hosts being used was not influenced by the type of foundress group (logistic analysis: χ²₂ = 5.45, P = 0.07; note the marginal nonsignificance and that estimates of P-values obtained by logistic analysis are approximate rather than exact [Crawley 1993]: laying on both hosts was most common when 2 P. h. hilaris larvae were provided). Considering all the replicates (N = 40), the mean clutch size was 40.33 (SE ±3.50, range: 6-86, Table 1a). Clutch sizes were significantly larger on P. h. hilaris larvae compared with C. cephalonica larvae (GLMM with replicate identity fitted as a random factor: χ²₁ = 436.63, P < 0.001). The total number of eggs laid per replicate was not significantly different between foundress group treatments (F2,37 = 0.10, P > 0.05, Table 1a); nor was the number of eggs laid on each host larva (χ²₂ = 1.27, P = 0.53, Table 1a).
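The goodness-of-fit test reported at the start of this subsection (11 of 17 single-host ovipositions on P. h. hilaris versus 6 on C. cephalonica, against a 50:50 null) can be reproduced in a couple of lines; this is simply an arithmetic check, not the authors' code.

```python
# Chi-square goodness-of-fit for the 11 vs 6 split of single-host ovipositions.
from scipy.stats import chisquare

stat, p = chisquare([11, 6])        # expected frequencies default to equal (8.5, 8.5)
print(round(stat, 2), round(p, 2))  # ~1.47 and ~0.23, matching the values in the text
```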
Offspring production
Offspring were produced in 87.50% (35/40) replicates, and those offspring were produced from both hosts in 31.43%(11/35).The mean number of adult offspring per replicate (brood size at adult eclosion, combined across both hosts) was 24.30 (SE ±3.92, range: 1-93, Table 1b).Considering only the replicates where adults emerged from only one of the 2 larvae, the number of offspring produced on P. h.hilaris hosts was significantly greater than on C. cephalonica hosts (F 1,19 = 38.32,P < 0.001, Table 1b).There was no significant effect of foundress group composition treatment on the total number of adult offspring produced per replicate (F 2,37 = 1.73,P = 0.19), on adults produced from each P. h.hilaris larva (F 2,22 = 0.15, P = 0.86) or on numbers produced from each C. cephalonica larva (F 2,26 = 0.49, P = 0.62).Considering only replicates with offspring from only one host larva (N = 24), adult offspring production was not significantly affected by the host species utilized (χ 2 2 = 0.51, P = 0.78).
Sex ratio
Considering only the 35 replicates in which adult offspring emerged, the mean number of emerged males was 1.43 (±SE = 0.22) and ranged from 0 (20% [7/35] of replicates) to 6; no brood consisted entirely of males, indicating that at least one foundress in every group had mated. The mean sex ratio per replicate was 0.06 (±SE = 0.009) and was not significantly influenced by either foundress group composition (F2,32 = 1.02, P = 0.37) or by the species of host developed on (P. h. hilaris, C. cephalonica, or both; logistic analysis: F2,32 = 1.74, P = 0.18). Sex ratios declined as brood size increased (F1,33 = 6.44, P = 0.02; Fig. 4).
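The decline of sex ratio with brood size (Fig. 4) corresponds to a grouped binomial logistic regression. The sketch below shows one way such a model could be fitted in Python/statsmodels instead of R; the counts are made-up stand-ins for real broods, and no dispersion correction is applied.

```python
# Grouped binomial (logistic) regression of brood sex ratio on brood size.
# Each row is one brood: number of males and number of females.
import numpy as np
import statsmodels.api as sm

males = np.array([2, 1, 3, 1, 2, 1, 1])          # placeholder counts
females = np.array([8, 14, 20, 25, 33, 40, 52])
brood_size = males + females

X = sm.add_constant(brood_size)                  # intercept + brood size
y = np.column_stack([males, females])            # successes = males, failures = females
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.summary())  # a negative slope means the sex ratio falls as broods grow
```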
Timing of events
The time taken to oviposit (from the presentation of the hosts to the first egg being laid) on any one host varied between 6 and 30 days, with a mean of 10.78 (±SE = 0.57) days.Time to first oviposition was not significantly influenced by foundress group composition (cohort survival analysis, with hosts in which no eggs were laid treated as censors: χ 2 2 = 4.28, P = 0.12) or by current host species (χ 2 1 = 1.96,P = 0.16).The time to first oviposition was shorter when egg laying occurred on both hosts (12.30 ± 0.52 days) rather than on just one host (32.79 ± 0.75 days) in a replicate (cohort survival analysis with censors: χ 2 1 = 39.24,P < 0.001).Analyzing data from only those host larvae on which S. brevicornis broods reached the pupal stage (48/80), and considering each host individually, we found that developmental time (from the presentation of the host to adult offspring emergence) varied between 28 and 50 days (mean: 44.43, SE ±0.72) and was influenced by foundress group composition (cohort survival analysis, with replicates in which no adult emerged treated as censors: χ 2 2 = 11.11,P = 0.003; Fig. 5) and by an interaction between foundress group composition and host species (host species main effect: χ 2 1 = 0.16, P = 0.69; Host species × Group composition interaction: χ 2 5 = 11.75,P = 0.038, Fig. 6).Parasitoids matured most rapidly (44.70 ± 1.12 days) in replicates in which the 4 foundresses had developed on P. h.hilaris hosts.
Foundress position
We analyzed the positions of individual females at each observation time on each host.As there were multiple records per replicate, replicate identity was included as a random factor (but see Table 2).
We first considered foundresses' positions in terms of the hosts currently presented to them and the host species they had developed on.Across the entire experimental period, females were observed on P. h.hilaris larvae about twice as often as on C. cephalonica larvae (GLMM with replicate identity fitted as a random factor, χ 2 1 = 54.30,P < 0.001, Table 2), and there was no significant effect of foundress group composition (χ 2 2 = 1.38,P = 0.71, Table 2).Considering only the period between the presentation of the hosts and the first observation of eggs, females spent significantly more time (percentage of observations) on the P. h.hilaris larvae (χ 2 1 = 45.09,P < 0.001), and there was no effect of foundress group treatment (χ 2 2 = 3.79, P = 0.29).During the periods from oviposition to the first larval emergence (early brood stages) and from the first larval eclosion to the first pupal emergence (late brood stages), there were, again, significant preferences for the P. h.hilaris (Early: χ 2 1 = 33.43,P < 0.001; Late: χ 2 2 = 16.83,P < 0.001) and no effect of foundress group treatment (Early: χ 2 1 = 6.57,P = 0.08; Late: χ 2 2 = 3.20, P = 0.36).
We next considered females' positions in terms of the numbers of females on each host.We calculated the degree of deviation from the null expectation that the 4 females would occupy the hosts in 2 pairs of foundresses (an ideal free distribution, assuming hosts to be of equal quality).Wasps were in this distribution only 3.14% of the time (68/2,163 observations) and deviations were not influenced by foundress group treatment (logistic GLMM, including replicate identity as a random factor: χ 2 2 = 0.93, P = 0.63).For P. h.hilaris hosts, there were similar percentages of occurrence of single females (28.38%), 2 females (29.63%), and 3 females (28.73%) on one host (Table 3).For C. cephalonica hosts, females were most likely to be observed alone on a host (55.17% of observations, Table 3).
Foundress group treatment levels varied both in terms of foundresses developmental backgrounds and in terms of the relatedness of foundresses within groups.Using the data from replicates with 2 pairs of sibling foundresses, we explicitly considered foundresses' positions in terms of their relatedness to other foundresses by calculating a sibling aggregation score from each observation.This had a value of zero if none of the females were observed together on a host with their sibling, rising to 1 if all females were with their sibling.The overall mean was 0.244 (±SE = 0.016), indicating that in around one quarter of the observations, sibling foundresses were associated with each other on the same host.This is lower than the null expectation of 0.47 (assuming all spatial arrangements of the 4 wasps are equally likely), suggesting that siblings tend to dissociate from each other.
Foundress movements
During the entire observational period, from host presentation to pupation of offspring broods, females were observed to move between the presented hosts.The overall mean number of movements per female in a replicate was 1.43 (±SE = 0.13), with a maximum of 10 movements by an individual female.The total number of foundress movements within replicates were influenced by foundress group composition (log-linear GLMM with replicate identity fitted as a random factor, χ 2 2 = 7.66, P = 0.02), being most frequent when foundresses originated from the P. h.hilaris rearing system (Fig. 7).Assessing the movements of each individual foundress similarly showed that foundresses reared on P. h.hilaris moved more often (GLMM χ 2 1 = 7.94, P = 0.005).
Discussion
The present research provides information about the reproductive behavior and performance of S. brevicornis according to the species of host that the wasps developed on and the species of hosts they subsequently encounter. The longer-term consequences of rearing S. brevicornis on C. cephalonica were not assessed in previous studies, and thus there was no information on the likely effectiveness of female S. brevicornis reared on this host. The current study has begun to address this knowledge gap, albeit within the laboratory environment. However, laboratory assays can be useful to predict the field performance of mass-released parasitoids and to attune release rates accordingly (Bourchier et al. 1993). We found that the death of foundress females was a common occurrence and that dead females were bitten into 2 parts. Although foundresses may occasionally kill other potential foundresses by biting them into two (as reported in S. guani, Guo et al. 2023), it is common across several Sclerodermus-host associations that death occurs as a result of defense by attacked hosts (summarized in Abdi et al. 2020a). It seems most likely that the deaths we observed were principally due to the actions of hosts, rather than of other foundresses, as, in the choice test experiment, no foundress deaths occurred after hosts were paralyzed. The no-choice tests (presentation of a single "natural" host) indicated that the host that foundresses developed on did not influence the probability of a female being killed. In the choice test, we similarly found no difference in mortality between groups of foundresses that all originated from the natural host or all from the factitious host. Although these results suggest that there is no host-of-origin effect on the ability of S. brevicornis females to tackle P. h. hilaris larvae, in the choice test we also found that foundress mortality was highest when the females in a group originated from 2 different hosts, and the form of Fig. 3 suggests that developing on C. cephalonica might be a disadvantage in future host attack. Another candidate explanation is that females with different origins recognized each other as nonsiblings and exhibited heightened interfemale aggression (as reported in S. guani, Guo et al. 2023), but, in terms of explaining high mortality, this runs counter to the possibility that foundresses in less closely related groups are less likely to take risks in host attack (as previously reported in S. brevicornis, Abdi et al. 2020a, 2020b). The propensity to take risks can be reflected in the timing of host attack (Abdi et al. 2020a, 2020b), and our data indicated no effects of host-of-origin on the time taken to attack and suppress hosts or on the time to oviposition. This not only suggests that higher mortality among mixed-origin groups is not due to kin-correlated risk-taking in host attack but also indicates that females are equally able to recognize P. h. hilaris larvae as hosts, and may be equally able to suppress them, whether or not they developed on a P. h. hilaris host themselves. These are key considerations in the use of factitious hosts for the production of biocontrol agents for field release.
Although foundresses took the same time to attack and oviposit on hosts of either species, the production of offspring on P. h.hilaris larvae in the no choice tests was considerably lower when foundresses had developed on C. cephalonica.This suggests that C. cephalonica hosts do not provide the same quality or quantity of nutritional resources as P. h.hilaris, leading to adults that develop on C. cephalonica having fewer teneral reserves (those available on maturation as adults), and in consequence a lower capacity to mature eggs.In the choice tests, we, in contrast, found no effect of host-of-origin on the numbers of eggs laid or the number of adult offspring ultimately produced.Clutch sizes (and, ultimately, brood sizes) were larger on P. h.hilaris than on C. cephalonica hosts (and brood sizes on C. cephalonica were broadly similar to those reported by Abdi et al. 2021).As P. h.hilaris larvae were an order of magnitude larger than the C. cephalonica provided, a difference in resource quantity coupled with clutch size adjustment (Hardy et al. 1992, Visser 1994, Zaviezo and Mills 2000, Haeckermann et al. 2007, Tang et al. 2014) provides a straightforward candidate explanation.Resource quality (biochemical/metabolomic composition) may also have differed, given the different diets of the 2 host species and that they belong to different insect orders.As well as the size of broods produced in parasitoid mass-rearing systems, the sex ratios of broods can be an important consideration for biocontrol potential: female bias is generally a positive attribute because only female parasitoids suppress hosts post-release (Ode and Hardy 2008).We found that sex ratios were extremely female biased.Although sex ratios varied according to brood size, the effect was not strong (as previously observed in S. brevicornis and many other members of the genus; Tang et al. 2014, Abdi et al. 2020a, 2020b, Guo et al. 2022, Lehtonen et al. 2023, Malabusini et al. unpublished data).
The distribution of foundresses across hosts indicated a clear preference for P. h. hilaris, and this was not influenced by the host species that foundresses had developed on. This is an encouraging result in terms of the use of factitious hosts for the production of field-effective biocontrol agents. We further observed that S. brevicornis commonly moved between hosts during the entire monitoring period. This may indicate that in nature foundress females do not always remain with their own broods until the offspring complete development. Movements may be influenced by foundresses assessing the availability or quality of resources provided by their current host and/or by agonistic interactions between females (Guo et al. 2023). Our data also suggest that sibling females tend to avoid each other rather than tend a brood together on a common host: further experiments are needed to tease apart factors that influence sibling-sibling interactions and foundress group formation in relation to expectations from kin selection theory (e.g., Thompson et al. 2017). In addition, females that had emerged from P. h. hilaris hosts moved more frequently between hosts. One possibility is that P. h. hilaris hosts are larger than C. cephalonica and wasps developing from these have a greater physical capacity for movement, for instance due to being larger or having greater energy reserves (Gao et al. 2016, Wang and Keller 2020).
In conclusion, prior studies have shown that S. brevicornis can be reared on C. cephalonica, but the subsequent performance and preferences of adults reared from this factitious host species were not evaluated.The current study suggests that S. brevicornis reared from C. cephalonica do have some disadvantages compared with those that have developed on P. h.hilaris but also that they nonetheless recognize, and indeed have a preference for, P. h.hilaris, which they are then able to produce offspring from.From the point of view of biocontrol, the ability to develop on a factitious host is convenient and a subsequent preference for, and ability to utilize, the target pest as a host is highly desirable.
Fig. 1 .
Fig. 1. Schematic representation of the starting point of each no-choice test replicate: a vial containing 1 Psacothea hilaris hilaris larva and 2 adult female parasitoids.
Fig. 2 .
Fig. 2. Schematic representation of the starting point of each choice test replicate using a 3-sector Petri dish, viewed from above.Two sectors contained a host (2 different species) and 4 adult female parasitoids (with varying developmental backgrounds and relatedness) were released into the third.Parasitoids were able to then move freely between sectors while each host remained within its sector.
Fig. 3 .
Fig. 3. Survival of Sclerodermus brevicornis foundresses during the first week according to foundress group composition (4Psaco = 4 females developed on the same Psacothea hilaris hilaris; 4Corcy = 4 females developed on the same Corcyra cephalonica; 2P+2C = 2 females developed on a P. h.hilaris plus 2 developed on a C. cephalonica).The standard errors around the means are asymmetric due to back-transformation from logit-scale estimates.Significant differences are indicated by different letters (Tukey post hoc test).
Fig. 4 .
Fig. 4. Sexual composition of broods: relationship between brood sex ratio and the size of the brood, line fitted by logistic regression.
Fig. 7 .
Fig. 7. Mean number of movements made by individual foundresses within replicates according to group foundress treatment ("4Psaco" = 4 females developed on the same Psacothea hilaris hilaris; "4Corcy" = 4 females developed on the same Corcyra cephalonica; "2P+2C" = 2 females developed on a P. h.hilaris plus 2 developed on a C. cephalonica).The standard errors around the means are asymmetric due to back-transformation from log-scale estimates.Significant differences are indicated by different letters (Tukey post hoc test).
Table 2 .
Summed frequencies of observations of individual foundresses being on hosts of each species. Foundresses were observed around twice as often on P. h. hilaris host larvae as on C. cephalonica, and there was no significant association between the host species that females developed on and current host preference (χ² test of 2 × 2 contingency table: χ²₁ = 0.086, P = 0.77; note that this analysis is pseudoreplicated, and pseudoreplication tends to generate false significance: despite this, we find nonsignificance).
Table 1 .
Reproduction per host species provided and per replicate according to the hosts of origin of the group of foundresses
Table 3 .
Frequencies of foundress numbers observed per host | 2023-09-19T06:17:58.478Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "5d077c93a1858c21e48e3cb21538da9ccb73867a",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/jinsectscience/article-pdf/23/5/7/51665632/iead046.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3a4cee5d3cdf5bf62a64cd859f22bfe7b730907f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225489811 | pes2o/s2orc | v3-fos-license | Effects of Insulin against Aluminium Induced Neurotoxicity in Wistar Rats
Aluminium toxicity is well known to cause neurotoxicity leading to Alzheimer disease with dementia. The aim of the present study is to evaluate the effects of Insulin in Aluminium-induced neurotoxicity. Thirty male Wistar rats randomized into three groups (groups V, D, T) of ten each were used for the study after obtaining institutional animal ethics committee approval. Chronic aluminium neurotoxicity was induced in the rats and the neurobehavior was evaluated using the Morris water maze test, elevated plus maze test and rotarod test using standard methodologies. Group D rats exhibited significant deviation in performance of the behavioural tests of the study during day 1 (Morris water maze test: 18.6±9.5, elevated plus maze test: 34.9±1.9, rotarod test: 118.6±15.2) and day 30 (Morris water maze test: 64.5±4.6, elevated plus maze test: 72.1±3.9, rotarod test: 110.7±9.3). Rats of group T showed attenuation in the behavioural changes induced by aluminium toxicity (P values: Morris water maze test 0.0002, elevated plus maze test 0.0007 and rotarod test 0.015). Insulin may play a role in neuroprotection against toxicity similar to that of aluminium-induced neurotoxicity.
Introduction
Aluminium is a ubiquitous metal, which is potentially toxic to humans. Aluminium (Al) accumulation has been implicated as a causative factor in a variety of disorders. Abnormally high amounts of the metal have been found in various neurological conditions, including dialysis encephalopathy, amyotrophic lateral sclerosis, Down syndrome, Parkinson's disease and Alzheimer's disease (AD). 1,2 Aluminium can readily cross the blood brain barrier after systemic administration and may use the same high affinity receptor ligand system that has been postulated for iron. Once in the brain, Al accumulates in various regions including the hippocampus, where it can interfere with synaptic plasticity in a dose dependent manner. Application of different Al salts has generated neurofibrillary degeneration similar to that found in patients with dementia; 3 therefore Aluminium remains an environmental toxin that, when accumulated in the brain in high amounts, can have devastating effects.
The full scale of mechanisms underlying Al neurotoxicity is likely to involve multiple pathways. It has been reported that Al may interfere with neuronal signaling through interactions with glutamate receptors or calcium channels and / or intracellular calcium homeostasis. 4,5 Considerable evidence has been provided for an interaction of Al with the cholinergic system. 6 The Al was described in relation with cholinergic transmission and signaling. The cholinergic system is also known to be particularly affected in AD and cholinergic signaling is crucially involved in learning and memory mechanisms. 7 Al can induce neuronal and glial cell death, 8 extensive loss of synaptic contacts, and can at least potentiate the deposition of aggregated beta-amyloid protein in the brain parenchyma and within the cerebro-meningeal vasculature, 9 which in turn can promote inflammatory events. Al can also interfere with axonal transport through binding of tau protein and other neurofilament peptides and the degeneration of neurofibrils in a tangle-like conformation. 10 Advances in the understanding of both the bioinorganic chemistry of Al and the biochemistries of tau and amyloid precursor protein (APP) have strengthened the link between Al and neurofibrillary tangles (NFTs) and senile plaques (SPs) from one of association to one approaching an etiology. 11 Aluminium has long been implicated in clinical conditions like senile and pre-senile dementia of Alzheimer's type. The AD is the most common form of dementia in the elderly and characterized histopathologically by extensive brain atrophy caused by neuron loss, 12 intraneuronal accumulation of paired helical filaments (PHFs) composed of abnormal tau proteins-neurofibrillary tangles, 13 and extracellular deposits of amyloid peptide (Aβ) in neuritic plaques 14 that are surrounded by a tract of neuroinflammation in specific regions of brain parenchyma including the cortex and hippocampus.
In addition to the neuropathologic lesions associated with AD, significant deficits in neurochemical functions and indices have been observed. Treatment with cholinesterase inhibitor drugs is currently the standard of care. 15 But the average durations of treatment and beneficial effects are not optimal in all cases, because of disappointing efficacy or poor tolerability of the initial treatment as well as secondary efficacy failure or adverse effects emerging during the maintenance phase. 16 Moreover, no treatment has been shown to significantly delay the progression of the disease. Therefore, efforts to identify novel approaches in the management of patients with AD are required. Also, Insulin/insulin-like growth factors have been related to increased oxidative stress, reactive oxygen or nitrogen species, and neuroprotection, and have been implicated in similar AD-type dementia. [17][18][19] The current study attempts to detect any improvement over presently available drugs using hitherto less commonly tried therapies in AD. The aim of the present study is to evaluate the effects of Insulin in Aluminium induced neurotoxicity.
Animals
The present study was conducted in randomly selected adult Wistar rats of either sex (150-200 g, 6 months). The animals were procured from the animal house of the institute. All the animals were housed in separate polypropylene cages (10 inch x 15 inch) containing 2 rats each in the departmental animal room under standard laboratory conditions of ambient temperature of 25±2°C, relative humidity of 65±5%, and a 12-hour dark/light cycle. All the animals were allowed standard rodent pellets and tap water ad libitum. Each rat was used for experimentation only once. The experiments were performed between 10.00 h and 13.00 h to minimize circadian influences. The Institute Animal Ethical Committee (1204/ac/08/CPCSE) approved the study design.
Drugs, Chemicals, and Animal Treatment
Insulin was procured from Knoll Pharmaceuticals Ltd. Mumbai. Aluminium chloride was procured from Avantor Performance Materials India Ltd.
All the animals were acclimatized for one week following randomization and grouping. The animals were grouped into three groups, namely groups V, D and T, of ten animals each. The group V (normal control) rats received normal saline intraperitoneally for 30 days. The group D (disease control) rats received aluminium chloride intraperitoneally (10 mg Al/kg body weight) for 30 days. The group T (treatment group) rats received aluminium chloride intraperitoneally (10 mg Al/kg body weight) for 30 days along with insulin 0.2 IU/kg/d intraperitoneally during the 16th to 30th day. Before giving insulin, the animals were given dextrose 600 mg/kg, i.p., to avoid hypoglycaemia.
Aluminium induced neurotoxicity: An experimental rat model of aluminium accumulation in the brain was developed to aid in determining the neurotoxicity of aluminium (Al). Aluminium chloride was dissolved in distilled water to prepare a solution of 10 mg/mL concentration. 35 Aluminium was administered once daily by intraperitoneal injections of AlCl3 (10 mg Al/kg body weight) for 30 days. 20
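As a quick sanity check on this dosing regimen, the sketch below (Python; hypothetical helper names, not code from the study) converts the 10 mg Al/kg dose into an injection volume and into the corresponding mass of anhydrous AlCl3. It assumes, for illustration only, that the 10 mg/mL stock concentration refers to elemental aluminium and takes a 200 g rat as the example animal.

```python
# Hypothetical back-of-the-envelope check of the dosing described above.
# Assumptions (not stated explicitly in the text): the 10 mg/mL stock refers
# to elemental Al, and the example animal weighs 200 g.
AL_MOLAR_MASS = 26.98      # g/mol, aluminium
ALCL3_MOLAR_MASS = 133.34  # g/mol, anhydrous AlCl3


def injection_volume_ml(rat_weight_g, dose_mg_al_per_kg=10.0, stock_mg_al_per_ml=10.0):
    """Volume of stock solution (mL) delivering the target elemental-Al dose."""
    dose_mg_al = dose_mg_al_per_kg * rat_weight_g / 1000.0
    return dose_mg_al / stock_mg_al_per_ml


def alcl3_mass_mg(dose_mg_al):
    """Mass of anhydrous AlCl3 (mg) that contains the given mass of elemental Al."""
    return dose_mg_al * ALCL3_MOLAR_MASS / AL_MOLAR_MASS


if __name__ == "__main__":
    dose_mg = 10.0 * 200 / 1000.0      # 2 mg elemental Al for a 200 g rat
    print(injection_volume_ml(200))    # 0.2 mL of stock per injection
    print(alcl3_mass_mg(dose_mg))      # ~9.9 mg AlCl3 carries 2 mg Al
```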
Behavioural Parameters
The behavioral tests were conducted during the five days (day 31-35) following the animal treatment. Morris water maze test was conducted on day 31-32, elevated plus maze test on day 33-34 and rotarod test on day 35.
Morris Water Maze Test
The Morris water maze test was performed following a previously described method. Training in the maze was given for 5 days, with one session of four trials each day, to all rats in the study. The platform remained in the same place during all the training sessions. Training was followed immediately by the test session. The procedure during all subsequent test sessions was identical to the training. Escape latency, the time between the animal being placed in the water and its escape onto the platform, was recorded and evaluated. 21

Elevated Plus Maze Test

The elevated plus maze test was done as described in previous studies. 22 The elevated plus-maze test was used to evaluate spatial, long-term memory, following the procedure described. Each rat was placed at the end of an open arm. Transfer latency, the time taken by the rat to move into one of the enclosed arms, was recorded on the 1st day. An arm entry is defined as the entry of all four feet of the animal into a closed arm. If the animal did not enter an enclosed arm within 90 s, it was gently pushed into one enclosed arm, and the transfer latency was assigned as 90 s. The rat was allowed to explore the maze for 20 s and was then returned to its home cage.
Rotarod Test
Muscle strength and coordination were evaluated. Rats were placed on a metallic rod (2 cm in diameter) rotating at a rate of 20 revolutions per minute. Circular sections divided the linear space of the rod into 4 lengths so that 4 rats could be initially screened for their ability to maintain themselves on the rotating rod for more than 3 minutes. If, after treatment, the animal could not remain on the rod for 3 successive trials of 3 minutes each, the test was considered positive, i.e. motor incoordination was produced by the test compound. Rotarod performance was evaluated as fall-off time in seconds from the rotating rod (20 rpm) within a period of 3 min. 23
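The two pass/fail rules described above (the 90 s cap on elevated plus maze transfer latency and the three-trial rotarod criterion) can be expressed compactly. The sketch below is a minimal illustration with hypothetical function names; it is not code used in the study.

```python
# Minimal sketch of the scoring rules described in the behavioural methods.
def capped_transfer_latency(latency_s, cap_s=90.0):
    """Elevated plus maze: latency to enter a closed arm, assigned the 90 s cap
    when the animal never enters an enclosed arm (latency_s is None)."""
    return cap_s if latency_s is None else min(latency_s, cap_s)


def rotarod_positive(fall_times_s, n_trials=3, criterion_s=180.0):
    """Rotarod: the test is positive (motor incoordination) when the animal
    fails to stay on the rod for the full 3 minutes in n successive trials."""
    last = fall_times_s[-n_trials:]
    return len(last) == n_trials and all(t < criterion_s for t in last)


# Example: falls at 120 s, 95 s and 140 s in three successive trials -> positive.
print(capped_transfer_latency(None))           # 90.0
print(rotarod_positive([120.0, 95.0, 140.0]))  # True
```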
Statistical Analysis
All results are presented as mean ± SD. Differences between groups were analysed with one-way ANOVA followed by Scheffe's post hoc test. A P value ≤ 0.05 was considered statistically significant.
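A minimal sketch of this analysis pipeline is shown below (Python with NumPy/SciPy). The group values are placeholders rather than the study data, and the Scheffe pairwise comparison is implemented directly from its definition rather than taken from any particular statistics package.

```python
# One-way ANOVA across groups V, D and T followed by Scheffe's post hoc test.
# Toy data only; not the values reported in the study.
import numpy as np
from scipy import stats


def scheffe_pairs(groups, alpha=0.05):
    """groups: dict mapping group name -> 1-D array of observations.
    Returns the ANOVA p-value and a dict of pair -> significant (bool)."""
    names = list(groups)
    data = [np.asarray(groups[n], dtype=float) for n in names]
    k = len(data)
    n_total = sum(len(d) for d in data)
    f_stat, p_anova = stats.f_oneway(*data)

    # Within-group (error) mean square and the Scheffe critical value.
    sse = sum(((d - d.mean()) ** 2).sum() for d in data)
    mse = sse / (n_total - k)
    f_crit = stats.f.ppf(1 - alpha, k - 1, n_total - k)

    results = {}
    for i in range(k):
        for j in range(i + 1, k):
            diff2 = (data[i].mean() - data[j].mean()) ** 2
            denom = mse * (1 / len(data[i]) + 1 / len(data[j]))
            results[(names[i], names[j])] = diff2 / denom > (k - 1) * f_crit
    return p_anova, results


rng = np.random.default_rng(0)
toy = {"V": rng.normal(25, 8, 10), "D": rng.normal(64, 5, 10), "T": rng.normal(55, 5, 10)}
print(scheffe_pairs(toy))
```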
Results and Discussion
Aluminium chloride (10 mg Al/kg body weight, intraperitoneal) for 30 days was reported to induce neurotoxicity with aberration in memory and learning. 20 Hence this model was taken for screening insulin 0.2 IU/kg/d against this aluminium induced neurotoxicity.
Effect of Insulin on Morris water maze performance
The aluminium chloride treated group D rats showed increased escape latency on the 30th day (64.5±4.6) in comparison to normal control group V rats (24.8±8.5), while the day one performance was comparable between group D and group V rats (P value = 0.54). Insulin treated group T rats (55.1±4.54) showed a decrease in latency time (P value = 0.0002) in comparison to the group D rats (64.5±4.6) on the 30th day of the study (Figure 1).
Figure 1. Escape latency of Morris water maze test. SEC-seconds, GROUP V-normal control group, GROUP D-disease induced group, GROUP T-treatment group. *-P value for comparison between group V and group D, #-P value for comparison between group D and group T.
In this animal experimental design, insulin was screened for neuroprotective effects against Al induced neurotoxicity using behavioural parameters. This included a battery of tests to evaluate the neurobehavioural effects of AlCl3 administered intraperitoneally to the rats. The neurobehavioural toxicity was screened using a battery of neuropsychobehavioural tests evaluating all aspects of memory, such as spatial memory, conditioned response and working memory. The paradigms used in the study were the Morris water maze, elevated plus maze and rotarod test. Spatial memory (a type of declarative memory) is evaluated by various models, of which the Morris water maze is considered to be the best model due to several advantages: food and water deprivation is not required in this test, and water provides a uniform intra-maze environment, thus eliminating any olfactory interference. In various experiments the Morris water maze has been successfully used for evaluation of anti-dementia and anti-amnesic drugs. The conditioning processes have been considered to be the basic element of learning. 24 This test has been included to evaluate the effect of Al salt on learning and memory.
Figure 2. Transfer Latency of Elevated plus maze test. SEC-seconds, GROUP V-normal control group, GROUP D-disease induced group, GROUP T-treatment group. *-P value for comparison between group V and group D, #-P value for comparison between group D and group T.
Effect of Insulin on Motor Behaviour
The latency of fall was significantly decreased in group D rats (72.1±3.9) on the 30th day of performance in comparison to group V rats (28.2±1.4). In comparison to group D rats, group T rats (60.4±8.26) showed an increase in latency of fall on the 30th day (P value = 0.01) (Figure 3). In this study, aluminium chloride intraperitoneal administration (10 mg Al/kg body weight) between the 1st and 30th day of the experiment showed a significant decrease in performance of behavioural activity, including Morris water maze activity (18.6±9.5 to 64.5±4.6 seconds), elevated plus maze behaviour (34.9±1.9 to 72.1±3.9 seconds) and rotarod behaviour (118.6±15.2 to 110.7±9.3).
Figure 3. Latency of fall of Rotarod test. SEC-seconds, GROUP V-normal control group, GROUP D-disease induced group, GROUP T-treatment group. *-P value for comparison between group V and group D, #-P value for comparison between group D and group T.
This neurobehavioural toxicity was attenuated by insulin administration (0.2 IU/kg/d, i.p.), as exhibited in all three behavioural tests (P value = 0.0002; 0.0007; 0.01). Overall motor behaviour was not much affected in rats of any group.
Behavior can be defined as the end product of a variety of sensory, motor and integrative processes occurring in the nervous system. 25 The functional capacity of the central nervous system cannot be determined by histological or even physiological studies without behavioral analysis. 26 Neurobehavioral methods are being used with increasing frequency in toxicity studies to assess the deleterious effects of chemicals and physical factors, on the presumption that they are more sensitive than other tests in determining toxicity, because behavior is a functional indicator of the net sensory, motor and integrative processes occurring in the central and peripheral nervous system. Aluminium chloride affects the nervous system and produces effects which are manifested in the form of a variety of symptoms, and no single test can evaluate its neurobehavioral toxicity.
Various studies have been done evaluating Aluminium induced neurobehavioral effects and morphological changes in the rat brain. 27,28 Animals loaded with aluminium develop both symptoms and brain lesions that are similar to those found in AD. The performance of the control group animals (Al treatment for 30 days) at all three neurobehavioral paradigms was significantly lower than that of the vehicle group (normal saline treatment for 30 days). The decline in cognitive function was assessed at 15 days and 30 days of treatment.
There was a constant decline in memory of Al treated animals.
Studies have confirmed that glucose administration can facilitate memory in healthy humans and in patients with Alzheimer's disease. Interestingly, glucose effects on memory appear to be modulated by insulin sensitivity (efficiency of insulin-mediated glucose disposal). 29 Biological data suggest that insulin may contribute to normal cognitive functioning and that insulin abnormalities may exacerbate cognitive impairments. In animals, systemic insulin administration has been associated with memory deficits, likely due, in part, to the hypoglycaemia that occurs when exogenous insulin is not supplemented with glucose to maintain euglycemia. Therefore, in this study insulin administration was preceded by fructose administration half an hour earlier to prevent peripheral hypoglycaemia. In the present investigation, the dose of insulin with fructose improved memory significantly, even though the values are somewhat better than control.
Results of a recent study indicate a direct action of prolonged (8 weeks) intranasal administration of insulin on brain functions, improving declarative memory and mood in the absence of systemic side effects. 30 Many studies are available which have evaluated the effect of insulin on the neuropathological mechanisms of memory deterioration, 31 but only a few studies have evaluated the effect of insulin treatment on retrieval of memory over a long duration of time. Studies which have shown a positive effect of insulin administration on learning and retrieval are of short duration. Epidemiological evidence suggests that insulin resistance influences the risk of developing AD. 32,33 The increasing evidence of insulin resistance in AD and the numerous other mechanisms through which insulin may affect clinical and pathological aspects of the disease suggest that improving insulin effectiveness may have therapeutic benefits for patients with AD. 34 The present work has shown that exogenous insulin helps in attenuation of aluminium chloride induced neurotoxicity. Chen Y et al.,
Limitations
This study could also have been carried out in diabetic rat models, which would give more information on the association of diabetes with neurotoxicity and on the neuroprotective role of insulin. Because of practical limitations, biochemical and histological studies could not be performed in this study. Only a single dose has been evaluated in this study; further studies may be required to explore the activity of larger doses.
Conclusion
The present study revealed the role of insulin in neuroprotection against neurotoxicity induced by chronic aluminium exposure. This may be further explored in other models and provide substantial evidence to aid the susceptible diabetic population. | 2020-08-13T10:05:21.068Z | 2020-08-12T00:00:00.000 | {
"year": 2020,
"sha1": "862efa7abdc499060abe6b92e7defb7a415dc367",
"oa_license": "CCBYSA",
"oa_url": "http://jurnal.unpad.ac.id/pcpr/article/download/28096/62-71",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5446e9e3ae91a69cbc3b019e902783dd3447cb66",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
249087962 | pes2o/s2orc | v3-fos-license | The Outcome of Surgical Intervention (Ventriculoperitoneal Shunt and Endoscopic Third Ventriculostomy) in Patients With Hydrocephalus Secondary to Tuberculous Meningitis: A Systematic Review
The objective of this study is to analyze the outcome of the safety and efficiency of the surgical interventions (ventriculoperitoneal shunt [VPS] and endoscopic third ventriculostomy [ETV]) in patients with hydrocephalus due to tuberculous (TB) meningitis. A systematic literature search has been conducted using PubMed, Google Scholar, PMC, and ScienceDirect databases from 2001 to 2022 April. A total of 16 studies have been included, irrespective of their design. These studies include patients diagnosed with hydrocephalus secondary to TB meningitis (TBM) treated with VPS or ETV. A systematic review was conducted to determine the efficiency of surgical procedures based on the outcomes and complications associated with these procedures. A total of 2207 patients (aged one month to 68 years) have been included in this study, out of which 1723 underwent VPS and 484 underwent ETV. The overall success rate in the VPS group varied from 21.1% to 77.5%. The overall success rate in the ETV group ranged from 41.1% to 77%. The overall complications rate in the VPS group varied from 10% to 43.8%, and the complications rate in the ETV group varied from 3.8% to 22.5%. After ruling out the significant differences in the average percentages of outcomes and complications followed by VPS and ETV, ETV is suggested in patients with chronic phases of illness because the chances of ETV failure are high during the initial stage. The uncertainty of the ETV gradually decreases over time. To attain favourable long-term outcomes with ETV in patients with TBM hydrocephalus (TBMH), ETV should be performed after chemotherapy, anti-tubercular treatment, and steroids. In addition, ETV is considered beneficial over VP shunt as associated long-term complications are significantly less compared to VP shunt. In contrast, VP shunt is suggested as a modified Vellore grading which shows a more favourable outcome in patients with acute illness than ETV.
Introduction And Background
Tuberculous meningitis (TBM) is a bacterial infection of the central nervous system involving the meninges of the brain and spinal cord. Mycobacterium tuberculosis is the causative organism of TBM. Hydrocephalus is the most common complication of TB meningitis, affecting children more than adults [1]. It is almost always present in patients who have had the disease for four to six weeks and occurs at an early stage of the disease process [1]. The hydrocephalus in patients with tuberculous meningitis could be either the communicating type or the obstructing type, the former being the more common [2]. The development of the obstructive type of hydrocephalus in tuberculous meningitis is due either to blockage of the fourth ventricle by thick exudates or to leptomeningeal scarring [3]. In the early stage of the communicating type of hydrocephalus, thick gelatinous exudates block the subarachnoid spaces at the base of the brain (most markedly in the interpeduncular and ambient cisterns). In the later stage of the communicating type of hydrocephalus, the exudates lead to dense scarring of the subarachnoid spaces. Communicating hydrocephalus may also result from an overproduction of CSF or be secondary to reduced absorption of CSF. Communicating hydrocephalus is seen more frequently in patients with TBM [3]. The medical management of TBM hydrocephalus (TBMH; communicating type) includes ATT (standard four-drug antitubercular therapy consisting of rifampicin, ethambutol, isoniazid, and pyrazinamide) dosed according to body weight, along with steroids (dexamethasone, given if CT showed thick basal exudates and there was evidence of infarcts) [2], and the dehydrating agents acetazolamide, furosemide, and mannitol [1]. The surgical management of TBMH includes endoscopic third ventriculostomy (ETV) and ventricular shunting (VA, VP, VPL, LP), most commonly ventriculoperitoneal (VP) shunting, which has been the procedure of choice so far [4]. Attempts to relieve pressure symptoms in infants with enlarged heads and adults with papilloedema and high lumbar cerebrospinal fluid (CSF) pressure include cerebellar decompression, lateral and third ventriculostomy, and short-circuits between the ventricular system and the subarachnoid space of the cerebral hemispheres [5]. However, the best plan to relieve the communicating hydrocephalus is to persist with intrathecal and systemic streptomycin [5]. High cerebrospinal fluid protein levels delay shunting.
Nevertheless, ventriculoperitoneal shunt (VPS) surgery complications in patients with TBMH are high, with frequent shunt obstructions and shunt infections requiring repeated revisions [4]. Therefore, the clinical grading system determines the patient's treatment strategy [3]. The most commonly used system is the Vellore grading of TBMH (Table 1), proposed by Palur et al. [6]. Alongside, Table 2 briefly discusses modified Vellore grading of patients with TBMH.
Review Methodology
The Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines 2020 were followed in this systematic review [8], and the population, intervention, comparison, and outcome (PICO) format was included in this study pattern.
The eligibility criteria of the studies in our survey can be found in Table 3.
Information sources, search strategy and data extraction process
A systematic literature search was conducted using the PubMed, Google Scholar, PMC, and ScienceDirect databases using the relevant keywords and MeSH strategy mentioned below (Table 4). A total of 16 studies have been included irrespective of their design, covering patients diagnosed with tuberculous meningitis and treated with VPS surgery or endoscopic third ventriculostomy (ETV). Two researchers worked independently to identify and extract the data. Quality assessment of each study was conducted using appropriate quality appraisal tools (NOS - Newcastle Ottawa Assessment Scale for Prospective and Retrospective Cohort Studies, and the Critical Appraisal Guide for Systematic Reviews (randomised studies)) from April 21 to 30, 2022. After removing all the duplicates manually and via EndNote, the authors' inclusion and exclusion criteria were used to evaluate the studies. All the irrelevant studies were omitted. The third author resolved any differences of opinion between the first two authors. After a complete analysis, 16 articles have finally been considered in this review.
The purpose of the study is to contemplate the outcome, safety, efficiency of surgeries (VPS and ETV), and complications of patients who underwent either ventriculoperitoneal shunt or endoscopic third ventriculostomy. The efficiency of procedures is based on the resolution of signs and symptoms and also on Vellore grading of patients with TBMH.
The search strategy of different databases using relevant keywords and MeSH strategy is summarised in Table 4.
Quality Assessment
Quality assessments of the reviews have been performed based on the guidelines mentioned below. In addition, articles that met at least 70% of the criteria have been included.
We followed the guidelines of the Newcastle Ottawa Assessment Scale for prospective and retrospective cohort studies: (1) Was the exposure and outcome of interest clearly explained? (2) Exposed people? (3) Nonexposed people? (4) The outcome of interest not present at the start of the study (5) Were the people similar? (6) Were the exposure and outcomes measured the same way? (7) Was the follow-up done correctly? (8) Was the follow-up long enough and sufficient enough? (9) Was this study published in an indexed journal? Outcome-based on: YES or NO.
The critical appraisals for systematic review are as follows: (1) Aim of the research; (2) Keyword explanation; (3) MeSH strategy; (4) Did the authors describe all the databases they used to collect the data? (5) Inclusion and exclusion criteria; (6) Did the authors check the quality (critical appraisal) of each study they included in the article? How did they critically appraise it? (7) Is the article published in a reliable database? (8) Were multiple authors involved in data extraction and quality appraisal? (9) Cochrane risk of bias assessment tool. Outcome-based on: YES, PARTIAL YES, NO.
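The stated inclusion rule (at least 70% of the appraisal criteria satisfied) could be applied as in the short sketch below. This is illustrative only: the scoring weight given to a 'PARTIAL YES' answer is an assumption of the example, not something the review specifies.

```python
# Illustrative application of the 70% quality-appraisal threshold.
def meets_quality_threshold(answers, threshold=0.70):
    """answers: checklist responses, e.g. 'YES', 'PARTIAL YES', 'NO'.
    Assumed weights: YES = 1, PARTIAL YES = 0.5, anything else = 0."""
    score = {"YES": 1.0, "PARTIAL YES": 0.5}
    total = sum(score.get(a.upper(), 0.0) for a in answers)
    return total / len(answers) >= threshold


# A cohort study answering YES to 7 of the 9 NOS items is included (7/9 ~ 0.78).
print(meets_quality_threshold(["YES"] * 7 + ["NO"] * 2))  # True
```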
Risk of Bias
The risk of bias in the considered studies has been briefed in Table 5. Studies of patients with TBMH who underwent either VPS or ETV can be found in Table 6.
Outcomes
Results for the patients with TBMH who underwent ETV based on the outcomes of success rate and complications can be found in Table 7.
Interpretation
The average follow-up period in the various studies mentioned above varied from one month to five years. The average outcome success rate of the ETV procedure in the studies mentioned above is 61.8%. However, the complication rate of the ETV procedure varied from 3.84% in the study of Aranha et al. to 16.75% in the study of Goyal et al. [3,4,10,13,17,20]. Common complications of ETV include CSF leak, perioperative bleeding, blocked stoma, bulging at the ETV site, and meningitis.
Results for the patients with TBMH who underwent VPS based on the outcomes of success rate and complications can be found in Table 8.
Interpretation
The average follow-up period in the various studies mentioned above varied from two weeks to six years. The average outcome success rate of the VPS procedure in the studies mentioned above is 57.82%. GOS (Glasgow Outcome Scale) and Vellore grading were outcome measures used by a few studies, and some studies used either death or disabilities to determine the outcome. The overall complication rate of the VPS procedure varied from 10% in the study by Kankane et al. to 43.8% in Sil and Chatterjee et al. [3,4,9,14,[18][19][20][21][22]. The common complications in VPS patients include shunt infections, shunt obstructions, intraventricular haemorrhage, and multiple shunt revisions.
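The summary figures quoted in these interpretation sections (average success rates and complication ranges for ETV and VPS) can be derived from study-level counts in more than one way. The sketch below uses placeholder numbers, not the review's extracted data, to contrast an unweighted mean of study-level rates with a patient-weighted pooled rate.

```python
# Two ways of summarising study-level outcome rates (placeholder data only).
def unweighted_mean_rate(rates):
    """Simple average of the study-level rates."""
    return sum(rates) / len(rates)


def pooled_rate(successes, totals):
    """Patient-weighted pooled rate: total events over total patients."""
    return sum(successes) / sum(totals)


# Three hypothetical VPS cohorts.
rates = [0.211, 0.578, 0.775]
successes = [8, 115, 62]
totals = [38, 199, 80]

print(f"unweighted mean: {unweighted_mean_rate(rates):.1%}")
print(f"patient-weighted pooled rate: {pooled_rate(successes, totals):.1%}")
```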
The preoperative and postoperative CT brain scans of a patient with TBMH who underwent VPS can be found in Figures 2-4.
Discussion
Hydrocephalus is the most frequent complication of TBM and is profoundly more common in children than in adults. Our study comprised 2207 patients with TBMH who underwent either VPS or ETV. Although various studies determined the efficiency of the surgical intervention based on the clinical outcomes and complications, the indications and timing of VPS and ETV were not steady across the studies. In our study pattern, success rates of ETV in patients with TBMH varied widely from 41.1% to 77% [3,4,[10][11][12][13]16,17,20]. The complication rate in ETV varied from 3.8% in the study of Aranha et al. [4] to 22.55% in Yadav et al. [3,4,10,13,17,20]. The common complications in patients who have undergone ETV include CSF leak, perioperative bleed, blocked stoma, bulge at the ETV site, and meningitis. The presence of advanced clinicalgrade, extra CNS TB, dense adhesions in the prepontine cistern, and unidentifiable third ventricle floor anatomy leads to the failure of ETV [12]. Complex hydrocephalus and associated cerebral infarcts are significant causes of failure to improve after ETV [17]. Results of ETV were better in patients without cistern exudates, good nutritional status, and a thin and identifiable floor of the third ventricle. ETV should be better avoided for acute hydrocephalus in patients with tuberculous meningitis and should be reserved for those who have been on ATT for at least four weeks or those in the phase of chronic burnout and hydrocephalus has developed late [1]. Some authors suggested ETV as worth trying before subjecting the patients to VP shunt as it showed good results in both communicating and obstructing hydrocephalus [4,11]. Few studies regarded ETV as the first choice of management in patients with TBMH despite high CSF cell counts, protein levels, and indistinct third ventricular floor anatomy [4]. On the other hand, a few studies suggested ETV as the first management choice and considered VP shunt and EVD in patients with failed ETV based on the clinical-grade [10]. Thus, there has been a lack of uniformity in the indications for performing endoscopic third ventriculostomy (ETV). On the other hand, success rates of VPS in patients with TBMH have varied widely from 21.1% to 77.5% [3,4,9,14,15,[18][19][20][21][22]. The complication rate in VPS varied from 10% in the study of Kankane et al. [21] to 43.8% in Sil and Chatterjee [3,4,9,14,[18][19][20][21][22]. The common complications in VPS patients include shunt infections, shunt obstructions, intraventricular haemorrhage, multiple shunt revisions, abdominal CSF collections like pseudocyst, subdural hematomas, skin erosions, pneumonia, and meningitis. One of the studies reported that shunt-related complications occurred in four patients and consisted of an obstruction at the lower end of the shunt in three cases, leading to revision. One patient had an infection at the shunt chamber site, leading to skin excoriation and meningitis [20]. A few studies reported that 15.8% of patients expired in the second and fourth postoperative weeks, respectively; among those who had undergone VPS placement, 21.1% of patients had a full recovery without sequelae, and the other 63.2% of patients survived with various sequelae, including paralysis, impaired vision and hearing, mental retardation, and epilepsy [18]. Rizvi et al. suggested that VPS outcome depends upon the clinical severity of TBMH and holds an unpleasant prognosis in HIV-infected patients compared to HIV-uninfected patients [22]. Srikantha et al. 
suggested a VP shunt as the first choice of management for grade 4 patients with hydrocephalus and recommended it for patients who do not improve with an EVD [15]. A few studies have suggested early VP shunt in patients with non-communicating hydrocephalus [9]. Prognostic factors to rule out the outcome of shunt surgery include the age of the patient, duration of altered sensorium, CSF cell count, and CSF protein levels. However, ETV has the theoretical ascendancy over VPS in enabling the CSF to circulate through the previously inaccessible areas of the brain, which can generally absorb cerebrospinal fluid. ETV also avoids lodging a foreign body in the form of a shunt, hence avoiding complexities like shunt infection, blockage, and abdominal pseudocyst formation [1].
Limitations
The study scale of the ETV group is small compared to the VPS group, and the data extracted from the adult population are inadequate to draw any conclusion. In addition, there is a significant shortage of information regarding follow-up duration, which would help determine the long-term outcomes and complications of the VPS and ETV procedures and the timing of procedures in patients with TBMH. Finally, apart from the foregoing concerns, there is limited access to the data, and the methods of the studies could have been described more specifically.
Conclusions
After much interpretation, it is suggested that clinical grading of the patients is a basic and effective method to determine the management of TBMH. Moreover, after ruling out the significant differences in the average percentages of outcomes and complications following VPS and ETV, ETV is suggested in patients with chronic illness because the chances of ETV failure are high during the initial phase. However, the uncertainty of the ETV gradually decreases over time. Therefore, to attain favourable long-term outcomes with ETV in patients with TBM, ETV should be performed after chemotherapy, ATT, and steroids. In addition, ETV can be beneficial over VP shunt because it requires fewer incisions, associated long-term complications are significantly fewer than with VP shunt, and there are no implanted foreign bodies. In contrast, VP shunt is suggested in the acute phase of illness, as patients in modified Vellore grading show favourable outcomes compared to ETV.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2022-05-27T15:17:58.245Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "fcc0a84e0dceeae2228d6891827d042520ac7dbf",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/97929-the-outcome-of-surgical-intervention-ventriculoperitoneal-shunt-and-endoscopic-third-ventriculostomy-in-patients-with-hydrocephalus-secondary-to-tuberculous-meningitis-a-systematic-review.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "601034ce77bfb95728c9859f0561e53a64c22bc9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
225976031 | pes2o/s2orc | v3-fos-license | Reclaiming the Lost Glory of Home: A Post-Colonial Study of The Selected Works of Abdullah the Cossack
The aim of the present study is to explore the notion of reclamation of the lost grandeur of home in the symbolic representation of the city Karachi in the novel The Selected Works of Abdullah the Cossack (2019) written by H.M Naqvi. The textual analysis uses the theoretical framework taken from Homi K Bhabha's postcolonial concept of 'unhomeliness'. The Selected Works of Abdullah the Cossack (2019) is selected as a frame of reference for the purpose of the study. The aim is to see how literature has hailed the past glories of the city, and it is concluded from the analysis that, through personification, symbolism, cultural markers and the language used, the author tries to regain the nostalgic glory that was once associated with the city and which thus provided a sense of pride and belongingness as home.
Introduction
The objective of the present study is to highlight the motif of reclamation of the lost glory of the cultural grandeur, heritage and traditions of the home city, Karachi, as portrayed in Naqvi's The Selected Works of Abdullah the Cossack (2019). The study also aims to explore the different literary devices and techniques used to portray the notion of reclamation of the lost grandeur of the city. The study also aims to throw light on the historical perspective of Karachi by revisiting the past. The frame of reference of the current study is the novel The Selected Works of Abdullah the Cossack (2019) written by H.M Naqvi. The rationale behind the study is to reclaim the past grandeur of the city through literature. The loss of cultural values and the mystical grandeur of the city Karachi reflects the widespread environmental and cultural deterioration of the present day, and it is but natural for us to seek a revival of those by delving into the past achievements that connected us with not just our values and culture but also provided a rationale for us to cling to the idea of belonging to a peaceful, proud place called home. For instance, the architectural decay and functional changes of the heritage buildings in Karachi have also caused a loss of the historical grandeur of the said structures (Soomro, Kanwal & Soomro, 2019). The result is therefore pathos and a nostalgia for the revival of the lost cultural glory.
The Selected Works of Abdullah the Cossack (2019) is based on the protagonist Abdullah, who is a personification of one of the liveliest metropolitan areas of the world, Karachi. The obvious setting of the novel is thus Karachi. Abdullah, a septuagenarian, wakes up on his seventieth birthday and goes to his lodge's balcony. The story starts with his launch on the balcony of his lodge. He has spent his life compiling the Mythopoetic Legacy of Abdullah Shah Ghazi (RA). After years of expenditure on compiling the Mythopoetic Legacy of Abdullah Shah Ghazi (RA), he feels nostalgic about the jazz of the city, and some real-life anxieties make him lose zeal in a big city like Karachi. He feels his life is without love, purpose and meaning. Therefore, the story proceeds with the quest for love in the figure of Juggun, for purpose in retrieving the 'home', an ancestral legacy prevented from being sold out, and for meaning by mentoring a young lad, Bosco (grandson of a friend), and writing about the forgotten talent of Layari, Rambo (a flyweight).
Thesis Statement
The novel The Selected Works of Abdullah the Cossack (2019) delves profoundly into nostalgia for the past. It depicts the past magnificence of the city along with a portrayal of the present-day deterioration of the socio-cultural values of Karachi. The novel is used as a medium to reclaim the bygone majesty of the city based on its mystical and cultural past, since the decay of heritage and historical buildings and the destruction of the environment all lead to the decay of socio-cultural values as well. Different tools and literary techniques are used to portray the motif of retrieving the lost splendour of the city. Literature is a medium through which intellectuals bring forth discourses that demand contemplation of and attention to scenarios that have gone unconsidered. A similar effort is seen in H.M Naqvi's novel, which hails the glories of Karachi that have been lost somewhere in the megalopolis transition. The main focus of the study is to trace and delineate the motif of reclamation of the lost grandeur of the city portrayed in the novel through different techniques and literary devices.
Research Questions
1. How is the lost socio-cultural glory of Karachi as home reclaimed in the novel?
2. Which literary devices and techniques unfold the motif of reclamation of lost grandeur of the city/ home?
Unhomeliness and The Repressed Histories
Homi K Bhabha is one of the most important theorists in postcolonial studies. He has coined many concepts like hybridity, liminality, mimicry and ambivalence. Among these concepts, he introduced the concept of 'unhomeliness'. The concept of unhomeliness is different from homelessness; unhomeliness does not mean homelessness. It refers to the physical presence of a home and being physically at home but not being able to feel at home. According to Bhabha, 'the unhomely moment creeps up stealthily as one's own shadow' and suddenly one finds him/herself 'in a state of incredulous terror' (2012). It is 'a condition of extra-territorial and cross-cultural initiation' and a 'relocation of the home and the world', i.e. 'a shock of recognition of the world-in-home, the home-in-the-world' (Bhabha, 2012). Furthermore, Lois Tyson elaborates that being 'unhomely' is different from being homeless.
Bhabha's notion of unhomeliness echoes the Freudian concept of the 'uncanny'. Freud used the term 'unheimlich', meaning unhomely. Huddart (2006) elaborates on Bhabha's borrowing of the word 'uncanny' from the Freudian viewpoint: the uncanny contains its opposite, 'if the canny is the home it none the less has a tendency to morph into the profoundly unfamiliar, the 'unhomely', which alienates and estranges us from what we thought was most properly our own. 'Living with uncanny ability to live at home is an ability that might always become a burden of having no home'.
Freud argues that 'for this the uncanny is in reality nothing new or foreign, but something familiar and old-established in the mind that has been estranged only by the process of repression' (Suyoufie, 2005). According to Freud 'any repression is necessarily incomplete, and so any past is always just about to break through into the present. For psychoanalysis the traces of past beliefs and experiences remain present in the mind and Freud called it 'omnipotence of the thoughts'. Uncanny arises from the 'repression of our supposedly primitive beliefs'. These 'repressed histories' is a sense of belonging that comes back to consciousness and questions the present. This unhomely echo of histories that modernity might have preferred had remained hidden (Huddart, 2006). 'The memory that survived from the past exists in a fractured, discontinuous relationship with the present' (McLeod, 2000). A balance through the uncanny can be attained by 'gathering the memories of underdevelopment, gathering the past in a ritual of revival; gathering the present.' Bhabha said uncanniness is the repetition of 'lived life introducing difference and transformation'. Furthermore, this difference in repetition 'is a way of reviving the past, of keeping it alive in the present' (Bhabha, 2012).
Bhabha then further elaborates that 'the private (self) and public (other), past and present, psyche and social develop an interstitial Intimacy. These spheres of life are linked through an in-between temporality that takes the measure of dwelling at home, while producing an image of the world of history'. It means the connection is uncanny one. This in-between space is a communal space which leads to the exploration of 'interpersonal reality, aesthetically distanced, held back, and yet historically framed'. Goethe says, one cannot go back to the past because now he/she has 'settled...without noticing' that he/she 'had learned many foreign ideas and ways' which had been 'unconsciously adopted' (Bhabha, 2012). Bhabha (1992) further argued that the 'act of writing the world' must be 'fully realised presence of haunting' history. He said that 'in the House of Fiction one can hear the deep stirring of unhomeliness'. 'The disjunction between past and present, between here and there, makes home seem far removed in time and space, available for return only through an act of imagination' (McLeod, 2000). This act of imagining the past, is then written down representing the theme of unhomeliness. Moreover, modernity is a repression of origins leading to an unhomely state in the characters of fiction. Bhabha connected the unhomely state to cosmopolitanism and called it as 'vernacular' cosmopolitanism, which opens 'ways of living at home abroad or abroad at home' (Bhabha, 2012).
Material and Methods
The researcher has used Homi K Bhabha's notion of 'unhomeliness' to shed light on the reclamation of the lost glory of Karachi depicted in the novel through different literary techniques and devices. The purpose is attained by using the textual analysis proposed by Catherine Belsey. According to Belsey, textual analysis 'involves a close encounter with the work itself, without bringing to them more presuppositions'. The text is to be read first, which is to be followed by the questions of the researcher. The text always has a relation with the context, and understanding this relationship helps to pose meaningful questions. The meaning of the text is embedded in multiple layers forming a complex relation. The process of 'signification', i.e. the relation between 'signifier' and 'signified', is to be analysed and evaluated using sources of knowledge (theories) by looking profoundly into the literary techniques employed by the authors. In this way, the implicit meanings manifested through literature are unearthed (Griffin, 2005). The Selected Works of Abdullah the Cossack (2019) by H.M Naqvi is the novel chosen as a frame of reference to reclaim the lost glory of the mega city Karachi.
Reclaiming of Lost Home in The Selected Works of Abdullah the Cossack
"The great Pakistani city of Karachi, says the titular narrator of H.M. Naqvi's The Selected Works of Abdullah the Cossack (2019), was once a cultural mecca . . .That boisterous entrepot is long since gone…yet it survives in the memory of the novel's aging hero, and in this delirious love letter to the Karachi that was…Thrust into wheezing, hobbling action, Abdullah protects his friends and confronts his adversaries with a boldness and verbosity" (Sacks, 2019).
The protagonist Abdullah of the novel is a personification of the city Karachi as home. 'He is a self-educated, self-styled academic...trying to begin some sort of project pertaining to aspects of intellectual history'. He broods on the lost jazz of the city by 'mulling over the mythopoeic Legacy of Abdullah Shah Ghazi'. He used to be at the shrine of Abdullah Shah Ghazi on every 'Thursday night inhaling hashish amongst...fortune tellers, body builders, thugs, troubadours, transvestites, women & sweet, rowdy children. I am at home there' (TSWOATC). The 'fatiha and quls' recitation at funerals, 'positioning the corpse' projecting tents, 'the chairs arranged in rows, the audio system setup for the presiding maulvi' and attending 'condolence calls' gives an insight into old Karachi. The 'drums sound in the compound' at every evening in Sehwag where there was 'the seat of greatest saints, Lal Shahbaz Qalandar, 'commemorated Muharram' where the inhabitants pervade their houses 'with the sweet scent of burning incense sticks' was also drawing a glorious picture of Karachi. The commemoration of different sufi saints 'with song and dance until the day breaks' and 'the qawwals' singing 'modern masterpiece...Sakhtmushkilmeinhain, gham se haray hue, viz., We are in trouble Lordy, defeated by despondency' (TSWOATC)at the shrines gives a glimpse of old glorious Karachi.
The past of Karachi is reclaimed through different strands in the novel. Symbolism is one of the major strands in the novel for the reclamation of past glory. The Mythopoetic Legacy of Abdullah Shah Ghazi (RA) symbolises the old Karachi with its glories depicted by a seventy-year old septuagenarian man known as Abdullah the Cossack. Abdullah mentions in the beginning that he is, 'phenomenologist than a historian, less concerned with who did What & What Happened When than with the more discreet, indeed noble investigations-nary the chotachota but the motamota. For instance, the mythopoeic Legacy of Abdullah Shah Ghazi (RA), the patron saint of my city is one of the matters hitherto ignored by historians, pundits and punters alike, suggesting a variety of perversity that eclipses the newsworthy issues that vex the denizens of our Broad Swath of the World'. It is as if in this savage, insensible, this distracted age we have become obsessed with anecdotal indicators, hermeneutic lint, ignoring What Makes Us What We Are, indeed, What Makes the World What It Is'. (TSWOAT) Abdullah symbolises the old generation, who is nostalgic about the past glories of his city. It is this nostalgia, the absence of the past, which gives him a sense of 'unhomeliness' in the new Karachi 'home'. At every sight of new Karachi the old glories creep within his consciousness thus reclaiming the past grandeur of Karachi. The 'architectural delights scattered across the old city' has been levelled by land mafia which inculcates a sense of 'unhomely' in Abdullah. The transformation of 'Three Star Accommodation' into Hotel Grand and Gandhi Garden's alteration hovers a sense of 'unhomeliness' in Abdullah.
Abdullah's quest for 'mulling over a project, some permutation of the Mythopoeic Legacy of Abdullah Shah Ghazi (RA)' is highly symbolic. Everytime he is in any kind of danger or even on a death bed Abdullah is reminded of the sublime project of writing the Mythopoetic Legacy of Abdullah Shah Ghazi (RA). When he has dengue fever, he broods on a thought that, 'there is much to be done: securing patrimony, matrimony, Bosco's security, The Mythopoetic Legacy of Abdullah Shah Ghazi'. (TSWOATC) These two purposes are central to the character of Abdullah. Abdullah while acting as a mentor for 'character building' symbolises his present where he has to survive and writing an intellectual Legacy on Abdullah Shah Ghazi symbolises the past which is more glorified and homely to him thus protecting him from feeling 'unhomely'. Every time he is in his present the 'unhomely' feelings creep in and he goes back to his past to negotiate in the new Karachi. Before going, Layari, Abdullah visits the shrine of Abdullah Shah Ghazi (RA). Bosco asks about the shrine and its importance, he replies that 'Five men, good men, honourable men, brothers, settled here a millennium ago...Abdullah Shah Ghazi (RA) was one of them. We protected him and now he protects us' (TSWOATC). It is a protection from being soaked in the present lost grandeur of the city. It helps Abdullah to cling to the past and relieves the unhomely state of present.
This sense of reminiscing about the past and its glories protects him from being lost and helps him to negotiate in the new Karachi. For Abdullah, reviving the past is the only way to negotiate in new Karachi. At the end of the novel, Abdullah's visit to the shrine is very symbolic, as it shows that he satisfies his 'unhomely' feelings only by revisiting the past glorious Karachi of Abdullah Shah Ghazi (RA). His visit with the family to 'Uncle Jinnah's Mausoleum' is a revival of the past; Karachi was the city which had been inhabited by all the leaders after migration from India, including Quaid e Azam Muhammad Ali Jinnah.
Abdullah recognises 'the-world-in-home' and Bosco recognises 'home-in-the-world'. There is a juxtaposition of past and present, old and new generations, in the manifestations of these two characters. Abdullah, symbolic of the old generation trying to negotiate in new Karachi, is juxtaposed with Bosco, who is symbolic of the new generation also trying to negotiate in new Karachi.
The polarization between old and new generations symbolising old and new Karachi bears the concept of 'homeliness and unhomeliness'. The unhomely feelings of both the characters; Abdullah and Bosco, are commonly shared thus connecting them in a perfect relationship of mentoring and ease. Abdullah succeeded in negotiating the unhomely feelings in Karachi by revisiting the past, visiting the shrine of Abdullah Shah Ghazi, writing about the 'Mythopoetic Legacy of Abdullah Shah Ghazi' and 'forgotten heroes of Layari'. The unhomely feelings are overcome by Abdullah through his writing. He strongly realises Layari as 'motherland of many heroes' which now has become a motherland of 'land mafias'. He writes about one of the forgotten heroes, Rambo of Layari in a magazine to revive the glory of heroes and land. Bosco, on the other hand, succeeded in negotiating the unhomely feelings by keeping 'yo-yo' with himself and towards the end of the novel leaves Karachi and settles abroad (Australia).
Yo-yo symbolises the change in the character of Bosco. The change that lies in the preferences of Abdullah and Bosco on one level and on the other level refers to the change that has occured in Bosco particularly. Bosco comes back to Karachi after a long time with his family, he finds a change and variation within himself due to Uncle Cossack's words, he admits 'I was displaced as a teenager, floundering. Uncle Cossack took me under his wing and guided me as best he could. It's rare in this 'savage, insensible, distracted age' for somebody to just care.' The home is reclaimed through language as well. Abdullah inculcates Gujrati, 'tamaykaimcho? You okay?' on his visit to Layari. Abdullah has this taste of using urdu language to convey the complete essence of the message. Once a fortune-teller tells him that 'tum lambi race kayghore ho. You are the horse of long race' to give a tinge of true essence of the phrase. More intimately on the word level the language is used to create an impact of glorious past like 'Radi-wallah', 'Sayein', 'Horki al eh', 'razibazi' and 'Ninni time, bachon!' 'Anokhaymian, Master unusual'.
Moreover, Abdullah once reveals that even the 'children's nursery rhymes' have become 'uncanny'. 'The last time I was over, the childoos would not stop singing that ode to Marine life: Machlijalki rani hai/jeevanoskapanihai/ Hath lagaogey, darjayegi/ Baharnikalogey, mar jayegi.' All these contribute to the reclamation of the lost grandeur of Karachi.
Conclusion
To live in the unhomely world, to find its ambivalences and ambiguities enacted in the house of fiction, or its sundering and splitting performed in the work of art, is also to affirm a profound desire for social solidarity (Bhabha, 2012).
Summing up the whole discussion in the light of Bhabha's concept of 'unhomeliness', analysed through the textual analysis proposed by Catherine Belsey, it can be said that the novel's symbolic depiction of the characters Abdullah and Bosco, the cultural markers portraying the historical perspective of the city, and its language all combine to achieve the motif of reclamation of the old glory of Karachi. Two characters are juxtaposed to fulfil the purpose of reclamation of the past. Abdullah, personifying Karachi and symbolising the old generation, feels 'unhomely', so he tries to negotiate and retrieve the past Karachi, by 'mulling over the project' of writing the Mythopoetic Legacy of Abdullah Shah Ghazi, in order to feel at home. On the other hand, Bosco, symbolising the new generation, feels 'unhomely', so he tries to negotiate and escape by going abroad in order to feel at home. Cultural markers like Quaid e Azam's tomb and the native language used in the novel have contributed to achieving the motif of reclamation of the past grandeur of Karachi.
"year": 2020,
"sha1": "a6ea8ee1636a7def801c07f7a9d736b73f546328",
"oa_license": null,
"oa_url": "https://doi.org/10.35484/pssr.2020(4-i)41",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "6c79df97aceb8f9a77508c594cbcc2f3a529dbc4",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Art"
]
} |
260620885 | pes2o/s2orc | v3-fos-license | Prediction of adverse health outcomes using an electronic frailty index among nonfrail and prefrail community elders
Background Early recognition of older people at risk of undesirable clinical outcomes is vital in preventing future disabling conditions. Here, we report the prognostic performance of an electronic frailty index (eFI) in comparison with traditional tools among nonfrail and prefrail community-dwelling older adults. The study is to investigate the predictive utility of a deficit-accumulation eFI in community elders without overt frailty. Methods Participants aged 65–80 years with a Clinical Frailty Scale of 1–3 points were recruited and followed for 2 years. The eFI score and Fried’s frailty scale were determined by using a semiautomated platform of self-reported questionnaires and objective measurements which yielded cumulative deficits and physical phenotypes from 80 items of risk variables. Kaplan–Meier method and Cox proportional hazards regression were used to analyze the severity of frailty in relation to adverse outcomes of falls, emergency room (ER) visits and hospitalizations during 2 years’ follow-up. Results A total of 427 older adults were evaluated and dichotomized by the median FI score. Two hundred and sixty (60.9%) and 167 (39.1%) elders were stratified into the low- (eFI ≤ 0.075) and the high-risk (eFI > 0.075) groups, respectively. During the follow-up, 77 (47.0%) individuals developed adverse events in the high-risk group, compared with 79 (30.5%) in the low-risk group (x2, p = 0.0006). In multivariable models adjusted for age and sex, the increased risk of all three events combined in the high- vs. low-risk group remained significant (adjusted hazard ratio (aHR) = 3.08, 95% confidence interval (CI): 1.87–5.07). For individual adverse event, the aHRs were 2.20 (CI: 1.44–3.36) for falls; 1.67 (CI: 1.03–2.70) for ER visits; and 2.84 (CI: 1.73–4.67) for hospitalizations. Compared with the traditional tools, the eFI stratification (high- vs. low-risk) showed better predictive performance than either CFS rating (managing well vs. fit to very fit; not discriminative in hospitalizations) or Fried’s scale (prefrail to frail vs. nonfrail; not discriminative in ER visits). Conclusion The eFI system is a useful frailty tool which effectively predicts the risk of adverse healthcare outcomes in nonfrail and/or prefrail older adults over a period of 2 years. Supplementary Information The online version contains supplementary material available at 10.1186/s12877-023-04160-1.
Introduction
Frailty is a multidimensional syndrome characterized by increased vulnerability resulting from age-dependent decline in physiologic reserve and homeostatic regulation [1]. Burgeoning studies have shown that frailty is associated with adverse outcomes such as falls [2], hospital (re)admissions [3,4], disability [5][6][7], and all-cause mortality [8]. As the world population ages, frailty has become a global health burden, with substantial impact on clinical practice and public health [9]. Depending on the operational criteria, study populations and socioeconomic levels, the prevalence of frailty varies greatly, from 4 to 49.3% [10,11]. Two generally accepted approaches have been used to define frailty, i.e., the rule-based Fried's frailty scale, which measures physical phenotypes [12], and the deficit-accumulation frailty index (FI), which quantitates health vulnerability [13]. A third class of approach, the judgement-based Clinical Frailty Scale (CFS), was developed as a handier tool to measure fitness and frailty in older adults [14], and has since been associated with outcomes in multiple clinical settings [15]. In Rockwood et al.'s original cohort, elders with CFS 4–7 points were more likely to die and enter an institution than those with CFS 1–3 points. The outcome of individuals graded as 'managing well' to 'fit' (CFS 2–3 points) [16], while not directly compared to the 'very fit' persons (CFS 1 point), could be deduced from Fried's model, where subjects with 'intermediate or prefrail status' (1–2 criteria) were found to exhibit worse outcomes than their 'nonfrail' (0 criteria) counterparts [12].
Nonfrail and prefrail individuals represent the vast majority of community-dwelling older adults. Nonfrail elders, especially those without prior major health events, are generally considered 'fit to very fit' or 'robust'. However, in a recent systematic review [17] pooling 120,805 nonfrail older adults from 46 observational studies, the incidence rate of prefrailty was much higher than that of frailty (about 151 new cases of prefrailty per 1000 person-years vs. 43 new cases of frailty). Prefrailty, like the frailty syndrome mentioned above, is associated with adverse health outcomes [11,17]. Thus, it is imperative to identify susceptible elders so that disabling conditions can be prevented by timely instruction or intervention [18][19][20][21][22][23].
Given the subtleness of health changes in nonfrail and prefrail older adults, we hypothesized that the deficit-accumulation FI approach, which is rooted in the concept of comprehensive geriatric assessment (CGA) [24], may detect health vulnerability and predict adverse outcomes more extensively than traditional rule-based tools [2,[25][26][27]. However, the procedure of data acquisition for any given FI is often time-consuming and unwieldy in clinical practice [9,28]. As such, assessment with the aid of routinely available electronic health record data has been used to construct FIs to expedite frailty screening [29][30][31]. While these FIs identified susceptible elders with decent predictive validity, the electronic medical records lacked essential elements of physical phenotypes such as grip strength and walking speed, which limited individually tailored preventive actions. Newer digital devices have also been adapted to measure frailty components such as walking speed and gait [32,33], yet most instruments used were stand-alone and not interconnected. Here, we report the usefulness of a semiautomated electronic FI (eFI) system which comprised 80 risk factors of health deficits. The predictive performance of the eFI was analyzed and compared to that achieved with the traditional CFS and Fried's frailty scale in a prospective cohort of nonfrail and prefrail community elders followed over a period of 2 years.
Study design and setting
This prospective cohort study was conducted at the Department of Geriatrics and Gerontology of a tertiary medical center. Community-dwelling older people who received geriatric health examinations were recruited from April 2018 to December 2018. The study protocol was reviewed and approved by the Research Ethics Committee of the National Taiwan University Hospital (No: 201802035RINB).
Participants
Older adults aged 65-80 years with basic literacy skills and a CFS rating of 1-3 points using a validated traditional Chinese version [34] were enrolled and followed for 2 years. Patients with dementia or active cancer and those who were unable to follow measuring instructions were excluded. Individuals with pacemakers or metal implants were excluded to avoid interference with the bioelectrical impedance analysis. Formal written informed consent was obtained from each individual before participating in this study.
Assessment of frailty risk
The severity of frailty was assessed by using an 80-item eFI built into the BabyBot vital data recording system (Netown Corporation, Taipei, Taiwan), which yielded the deficit-accumulation eFI score and Fried's frailty phenotype.
Deficit-accumulation eFI score
The eFI system used a count of 80 'health deficits' (risk factors) whose selection was in accordance with the criteria of construction and was ascertained by an expert team of geriatricians listed as authors on this paper. The full list of variables is provided in the Supplementary Table. Among these factors, 68 subjective items were obtained by self-reported questionnaires presented on a touchscreen tablet interface, while 12 objective items were measured using medical devices approved by the Taiwan Ministry of Health and Welfare, including a three-in-one machine (OMRON Automatic Blood Pressure Monitor; BabyBot Pulse Oximeter) for vital signs, a bioelectrical impedance analyzer (Tanita BC-418, Tokyo, Japan) for body composition and body mass index, a hand-held dynamometer with digital output for hand grip strength, a gait speedometer with infrared sensors for walking speed, as well as a cushion-type pressure sensor for the timed up and go test and the 5-times sit-to-stand test. The assessment of each participant was conducted under the guidance of a trained assistant. The replies to the questionnaires and the results of the measurements were uploaded to the internet without manual recording (Supplementary Figure). The eFI system assigns equal weights to all 80 included items. One point was given for each abnormal deficit, and the cumulative deficit (range 0-80) was translated into the eFI score by calculating the sum of all deficits divided by the total of 80 risk factors included in the system (eFI score = 0-1). Given the nature of the risk factors included, and to simplify the interpretation of results, we chose a cut-point based on the median eFI score. Individuals with an eFI score ≤ the median value were defined as 'low risk', while those with an eFI score > the median were classified as 'high risk'. To confirm the predictive accuracy of the categorical classification, a separate Cox regression model was constructed, treating the eFI score from 0 to 1 as a continuous variable.
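As an illustration only, the scoring rule described above can be sketched in a few lines of Python; this is not the study's software, and the example participant and the 0.075 cut-point used below simply follow the description in this section.

def efi_score(deficits):
    # deficits: 80 values coded 1 (abnormal deficit present) or 0 (absent)
    if len(deficits) != 80:
        raise ValueError("the eFI described here uses exactly 80 items")
    return sum(deficits) / 80.0  # eFI score ranges from 0 to 1

def efi_risk_group(score, median_cutpoint=0.075):
    # dichotomize at the cohort median, as done in the study
    return "high risk" if score > median_cutpoint else "low risk"

example = [1] * 6 + [0] * 74      # a participant with 6 abnormal items out of 80
s = efi_score(example)            # 0.075
print(s, efi_risk_group(s))       # a score equal to the median falls in the low-risk group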
Fried's frailty phenotype
The rule-based frailty phenotype is defined according to the following 5 criteria: unintentional weight loss (5 kg in the past year), self-reported exhaustion, weakness (grip strength), slow walking speed, and low physical activity. The result for each criterion was extracted during the same round of assessment used to obtain the eFI score. Individuals with a frailty score of 0, 1-2 and > 2 are classified into nonfrail, prefrail and frail groups, respectively.
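For completeness, the same kind of illustrative sketch can express this rule-based classification; the five criteria are assumed to arrive as pre-computed booleans, and the thresholds behind each criterion are not reproduced here.

def fried_group(criteria_met):
    # criteria_met: 5 booleans (weight loss, exhaustion, weakness, slowness, low activity)
    n = sum(bool(c) for c in criteria_met)
    if n == 0:
        return "nonfrail"
    if n <= 2:
        return "prefrail"
    return "frail"

print(fried_group([True, False, False, False, False]))  # prefrail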
Outcome measures
During the 2-year follow-up, any incident adverse events, including falls, emergency room (ER) visits and unexpected hospitalizations, were collected every three months through telephone interviews. A fall episode was defined by the WHO as 'an event which results in a person coming to rest inadvertently on the ground or floor or other lower level.'
Statistical analyses
SAS version 9.4 (SAS Institute, Cary, NC) was used for the analyses. T tests (for normally distributed continuous variables), Mann-Whitney U tests (for nonnormally distributed continuous variables) and chi-square tests (for categorical variables) were used for between-group comparisons. Curves for the probability of falls, ER visits and hospitalizations within 24 months were created with the Kaplan-Meier method and compared using the log-rank test. Multivariate analysis adjusted for age and sex was performed using a Cox proportional hazards regression model to project the impact of frailty risk (high- vs. low-risk, as a categorical variable) or the eFI score (0-1, as a continuous variable) on the adverse health outcomes of falls, ER visits and hospitalizations. A p value < 0.05 was considered significant.
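For readers who wish to reproduce this style of analysis outside SAS, the sketch below performs the same steps (Kaplan-Meier curves, a log-rank test, and an age- and sex-adjusted Cox model) with the open-source Python package lifelines; the column names and the tiny data frame are illustrative assumptions, not study data.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "time_months": [24, 6, 18, 24, 3, 12, 24, 9],
    "event":       [0, 1, 1, 0, 1, 1, 0, 1],   # 1 = fall, ER visit or hospitalization
    "high_risk":   [0, 1, 1, 0, 1, 0, 1, 0],   # eFI above the median cut-point
    "age":         [70, 74, 72, 68, 77, 66, 71, 75],
    "sex":         [1, 1, 0, 0, 1, 0, 1, 0],   # 1 = male
})

# Kaplan-Meier curve per risk group and a log-rank comparison
km = KaplanMeierFitter()
for label, grp in df.groupby("high_risk"):
    km.fit(grp["time_months"], grp["event"], label=f"high_risk={label}")
hi = df[df.high_risk == 1]
lo = df[df.high_risk == 0]
print(logrank_test(hi["time_months"], lo["time_months"],
                   event_observed_A=hi["event"], event_observed_B=lo["event"]).p_value)

# Cox proportional hazards model adjusted for age and sex
cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
print(cph.summary[["exp(coef)", "p"]])  # exp(coef) corresponds to the hazard ratio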
Approaches of utilizing the 80-item eFI system
A total of 427 older adults with a CFS of 1-3 points underwent evaluation by the eFI system on the index date. For pragmatic reasons, participants were assessed either one by one or in groups of 2 to 7 people. Ten persons were assessed with the one-by-one approach, 34 in groups of 2 (17 groups), 84 in groups of 3 (28 groups), 112 in groups of 4 (28 groups), 130 in groups of 5 (26 groups), 36 in groups of 6 (6 groups) and 21 in groups of 7 (3 groups). As shown in Table 1, with the one-by-one evaluation approach, the assessment was completed in 18.1 min on average. With the group evaluation approach, it took an average of 19.6 min for 2 people, 20.7 min for 3 people, 22.8 min for 4 people, 24.8 min for 5 people, 31.9 min for 6 people and 39.7 min for 7 people. The more participants each group contained, the longer it took to complete the whole assessment. That being said, the group approach was more time-efficient than the one-by-one approach. We found that groups of ≥ 3 persons could save up to 60% of the total time estimated for the one-by-one approach.
Baseline characteristics of the participants
The mean age of the participants was 71.3 years, with 197 (46.1%) being men. The median eFI score was 0.075, with an interquartile range of 0.0625. Two hundred and sixty elders were categorized as low-risk (i.e., eFI score ≤ 0.075), while 167 were classified as high-risk (i.e., eFI score > 0.075). Four participants (1 low-risk, 3 high-risk) withdrew voluntarily shortly after the initial assessment due to personal reasons (Fig. 1). The baseline demographics, clinical characteristics and functional status of the two groups are shown in Table 2. No significant differences in sex, age, marital status or education level were observed. In men but not women, the high-risk group exhibited significantly lower grip strength than the low-risk group (29.3 kg vs. 31.9 kg, p = 0.003).
The high-risk group also reported more medical conditions, including hypertension, diabetes mellitus, hyperlipidemia, coronary artery disease, chronic obstructive pulmonary disease, chronic liver disease and urologic disorders. The average eFI scores of the high- and the low-risk groups were 0.11 and 0.05, respectively.
Adverse events predicted by the 80-item eFI score and the traditional tools
As shown in Table 3, the eFI scoring system identified 260 (60.9%) individuals as low-risk. In contrast, the CFS graded 83.4% of the participants as very fit (4.0%) to well (79.4%), while Fried's frailty scale classified 281 (65.8%) individuals as robust and 144 (33.7%) as prefrail. Despite the discrepancies in classification, the trends of frailty risk revealed by the different frailty tools remained significantly correlated (Spearman correlation tests, p < 0.0001). It is noteworthy that among the 'relatively healthy' elders, defined by either very fit to fit with the CFS or robust with Fried's scale, approximately one-third (34.8% and 30.2%, respectively) were stratified as high-risk by the eFI scoring system. Moreover, up to 60.6% of managing-well individuals by CFS rating and 55.6% of prefrail individuals by Fried's scale were stratified as high-risk with the eFI system. These data suggest that the eFI system provided a more discriminative evaluation of overall health deficits among individuals with a CFS score of 1-3 points or a Fried's scale of 0-2 criteria.
Fig. 1 Flowchart of the study
During the 2-year follow-up, 77 of 164 (47.0%) high-risk participants and 79 of 259 (30.5%) low-risk elders experienced at least one of the three adverse events (χ2, p = 0.0006). The Kaplan-Meier analysis shows that the event-free survival curves for falls, hospitalizations, and ER visits during the 24-month follow-up were significantly better in the low-risk than in the high-risk group (log-rank test, p < 0.0001 for falls, p = 0.04 for ER visits, p < 0.0001 for hospitalizations). In contrast, the survival curves graded by CFS scores (1, 2, 3 points) and Fried's scale (0, 1, 2 criteria) were less discriminative compared with those achieved by the eFI stratification (high- vs. low-risk) (Fig. 2).
Discussion
This prospective observational study demonstrated for the first time that a novel, semiautomated eFI system effectively predicted the risk of adverse health outcomes among a cohort of nonfrail and prefrail community elders followed over a period of 2 years. Our participants were recruited based on CFS ratings from very fit (CFS 1 point) to managing well (CFS 3 points), with Fried's phenotype ranging from 0 to 1-2 criteria, and a median eFI score of 0.075 (interquartile range 0.0625), all of which fit the operational definitions of nonfrailty and/or prefrailty [9,12,16]. The outcome of individuals graded as 'managing well' to 'fit' (CFS 2-3 points) [16] might be comparable to that of prefrail elders (Fried's scale 1-2 criteria), who showed intermediate risk of incident falls, worsening mobility or disabled activities of daily living, hospitalization, and death [12]. The present study further found that elders with an eFI score > 0.075, i.e., the high-risk group, displayed an increased risk of falls, ER visits and hospital admissions compared with their low-risk counterparts. More importantly, in multivariable models adjusted for age and sex, the overall predictive performance of the eFI stratification (high- vs. low-risk) was more discriminative than that projected by either the CFS rating (CFS 3 vs. CFS 1-2 points) or Fried's scale (prefrail to frail vs. robust). These findings may have implications for practicing physicians in terms of identifying susceptible individuals, deploying preventive actions, and allocating healthcare resources. Indeed, a plethora of studies have shown that multidomain and interdisciplinary primary care interventions can reverse prefrailty to robustness among prefrail older adults [18][19][20][21].
Fig. 2 The event-free survival curves of falls, ER visits and hospitalizations between the high- and low-risk groups stratified by the eFI system, as well as subsets classified by CFS rating (1, 2, 3 points) and Fried's frailty scale (0, 1, 2 criteria)
Community-dwelling older adults, whether nonfrail or prefrail, are prone to developing frailty as they get older [17,35]. It is also possible that, due to reduced physiological reserve or intrinsic capacity, some elders may become frail and enter into disability prematurely following inadvertent adverse events such as falls or hospitalizations. Thus, early detection of susceptible individuals at greater risk of adverse health outcomes is of paramount importance so that frailty progression and its consequential outcomes can be prevented [23]. Among the traditional tools, the CFS is a handy clinical index and Fried's phenotype is a brief and concise scale, both of which have been used for screening and prediction since their launch. However, the CFS is judgement-based, requiring experienced physicians to maintain interrater reliability, while Fried's phenotype is rule-based and focused mainly on physical domains without referring to cognitive or psychosocial dimensions [36]. By comparison, the CGA-based FI model [24] has shown better or non-inferior discriminative power in risk identification and outcome prediction [2,[25][26][27]. But the time-consuming and unwieldy nature of this approach prevents its routine use in daily practice [9,28].
The present study provided an alternative solution to this problem by demonstrating the implementation of a semiautomated eFI system in nonfrail and prefrail community elders. It is noteworthy that approximately 60% of managing-well seniors (CFS 3 points) or prefrail elders by Fried's phenotype (1-2 criteria) were stratified as high-risk, suggesting that the eFI system might be more discriminative than traditional tools in evaluating the health status of community elders. Technically, the most efficient way of utilizing the system was to adopt the 'team' approach. Our data showed that groups of 3-5 individuals could allow a trained assistant to complete a full screening while saving up to 60% of the estimated total time compared with the 'one-by-one' approach. The built-in 80 risk variables also contained elements of Fried's phenotype (walking speed, grip strength) and the Study of Osteoporotic Fractures index (5-times sit-to-stand) [37], which could be measured during the same round of eFI assessment. This add-on value makes the eFI system a very useful tool to screen for frailty status, particularly in 'subhealthy' older adults residing in the community.
This study is unique in several aspects. First, the 80 items of health deficits included in the system could be obtained in real time without any missing data, thus increasing the reliability of its predictive utility. Second, the eFI platform can be operated in a semiautomated manner by a single person, which greatly reduces the unwieldy nature of calculating FI scores. Third, the eFI system contains all the necessary elements to measure frailty risk defined by other traditional tools such as Fried's frailty scale and the SOF index, making it a handy tool to screen for frailty in older adults. Last but not least, compared to the CFS score and Fried's scale, the eFI system provided more discriminative power in predicting adverse outcomes. This add-on benefit may help clinicians and other medical professionals to deliver more individually tailored preventive actions so that future disability can be averted.
As a new technology-based tool, there are some limitations and uncertainties about the eFI system. First, the self-reported questionnaires displayed on the touchscreen tablet interface might be subject to reporting bias. Nevertheless, self-reported tools such as the Kihon Checklist and FRAIL are widely used for frailty screening [38,39], and our participants could complete the current touchscreen survey with the assistance of a trained person, which substantially reduced any inconsistency during the evaluation. Second, some studies proposed cutoffs of > 0.25 to indicate frailty and 0.1 to 0.25 to indicate prefrailty [9,40]. In this study, we used the median eFI score of 0.075 to stratify our nonfrail and prefrail participants. Although this may seem arbitrary, we also constructed a separate Cox regression model treating the eFI score from 0 to 1 as continuous data, and the results remained consistent with those projected by the categorical stratification. Finally, because three outcomes (falls, ER visits and hospitalizations) were set to evaluate the predictive utility of the eFI, CFS, and Fried's scale, type 1 error might arise due to multiple testing. To adjust for this potential bias, we conducted a Bonferroni correction by multiplying the p-values by a factor of three in Table 4, and the results confirmed that the eFI score, especially when treated as a continuous variable, remained a significant predictor of all outcomes measured.
Table 1
Time efficiency in utilizing the eFI system. eFI, electronic frailty index; SD, standard deviation; NA, not applicable
Table 2
Baseline demographics, clinical characteristics and eFI score of the participants
Table 3
Correlations between different risk levels classified by eFI, CFS, and Fried's scale
Table 4
Hazard ratios for adverse events by different frailty-assessing tools. HR, hazard ratio; ER, emergency room; CI, confidence interval; CFS, Clinical Frailty Scale; eFI, electronic frailty index. *p < 0.05 | 2023-08-07T13:40:48.281Z | 2023-08-07T00:00:00.000 | {
"year": 2023,
"sha1": "4363b57f348b0c05e88f1618c08c0a1606021c24",
"oa_license": "CCBY",
"oa_url": "https://bmcgeriatr.biomedcentral.com/counter/pdf/10.1186/s12877-023-04160-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9e4e0380412af7170675246f7889726efc3a122c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259852374 | pes2o/s2orc | v3-fos-license | Exploring the alteration of gut microbiota and brain function in gender-specific Parkinson’s disease based on metagenomic sequencing
Background The role of the microbiota-gut-brain axis in Parkinson's disease (PD) has received increasing attention. Although gender differences are known to play an essential role in the epidemiology and clinical course of PD, there are no studies on the sex specificity of the microbiota-gut-brain axis in the development and progression of PD. Methods Fresh fecal samples from 24 PD patients (13 males, 11 females) were collected for metagenomic sequencing. The composition and function of the gut microbiota were analyzed from the sequencing data, and brain function was assessed by resting-state functional magnetic resonance imaging (fMRI). Gender-dependent differences in brain ALFF values and their correlation with the microbiota were further analyzed. Results The relative abundance of Propionivibrio, Thermosediminibacter, and Flavobacteriaceae_noname was increased in male PD patients. LEfse analysis showed that Verrucomicrobial, Akkermansiaceae, and Akkermansia were dominant in the males. In female patients, the relative abundance of Propionicicella was decreased, and Escherichia, Escherichia_coli, and Lachnospiraceae were predominant. The expression of the sesquiterpenoid and triterpenoid biosynthesis pathways was increased in male PD patients and was statistically different from females. Compared to the male PD patients, female patients showed decreased ALFF values in the left inferior parietal regions, and the relative abundance of Propionivibrio was positively correlated with the regional ALFF values. Conclusion Our study provides novel clinical evidence of the gender-specific relationship between gut microbiota alterations and brain function in PD patients, highlighting the critical role of the microbiota-gut-brain axis in gender differences in PD.
Introduction
Parkinson's disease (PD) is the second most common neurodegenerative disease, trailing only Alzheimer's disease. PD affects middle-aged and elderly individuals, and its key pathological characteristics are the progressive degeneration of nigrostriatal dopaminergic neurons and the formation of Lewy bodies. Epidemiological studies demonstrated that the prevalence of PD in Western countries is as high as 1% in people over 60 and more than 4% in those over 80 (Ascherio and Schwarzschild, 2016). The prevalence of PD in China is 1.7% among people over 65 years of age, and it is estimated that the number of Parkinson's disease patients in China will rise from 1.99 million in 2005 to 5 million by 2030 (Li et al., 2019). The clinical symptoms of PD mostly comprise progressive movement disorders, including bradykinesia, resting tremor, myotonia, and postural balance disorders. PD is also associated with a variety of non-motor symptoms, such as constipation, sleep disorders, and autonomic dysfunction (Ascherio and Schwarzschild, 2016). As the disease progresses, not only does the patient's quality of life deteriorate, but a huge burden is also created for society and families.
Increasing evidence points to significant gender-related differences in the epidemiology and clinical features of PD. Although the incidence of PD is twofold higher in males than in females, the female sex is associated with higher mortality and more rapid disease progression (Baldereschi et al., 2000; Dahodwala et al., 2018). Additionally, female patients differ from males in their response to treatment and self-assessment of quality of life (Cerri et al., 2019). Therefore, biological sex differences are considered one of the important factors influencing the development of PD. In addition to the effects of physiological sex, patients with PD exhibit unique changes in gut microbiota, and alterations in gut microbiota composition might be involved in the pathogenesis of PD. The gut microbiota is thought to be a key mediator of bidirectional communication between the gut and the brain along the gut-brain axis (Quigley, 2017). In one study, Prevotellaceae, Lachnospiraceae, and Faecalibacterium were found to be significantly decreased in PD patients compared to healthy controls, and the investigators speculated that these genera may exert anti-PD effects. In addition, the abundance of Verrucomicrobiaceae, Bifidobacteriaceae, Christensenellaceae, and Ruminococcaceae was observed to decrease in PD patients (Wallen et al., 2020). Notably, gender has been shown to influence the complexity and diversity of the gut microbiota, and the gut microbiota is gender-specific in the neurobiology of mental disorders (Clarke et al., 2013; Vemuri et al., 2019). However, no studies have yet elucidated whether there are gender differences in gut microbiota between female and male PD patients.
Pathological studies have shown that PD lesions are not limited to the substantia nigra but gradually extend to multiple brain regions, including the limbic system and extensive neocortex. The progressive development of lesions results in the formation and accumulation of Lewy bodies in local neurons, neuronal necrosis and loss, and subsequent functional abnormalities in the corresponding brain regions (Lees, 2012). Functional changes in the brain of PD patients can be studied by functional magnetic resonance imaging (fMRI). fMRI provides insights into the mechanisms of damage associated with many clinical symptoms of PD and helps to understand the mechanisms of neuroplasticity involved in the effectiveness of pharmacological and neurorehabilitation treatments. The amplitude of low-frequency fluctuations (ALFF) measures blood oxygen level-dependent (BOLD) signals in the low-frequency range and can be used to evaluate regional neural activity and directly observe changes in brain activity. In PD patients, ALFF can be used to assess neuronal activity in several brain regions, including frontal, parietal and temporal regions, the dorsal thalamus, the caudate nucleus, and other areas. In addition, ALFF can be used to determine the severity of motor and non-motor symptoms in patients.
Based on the concept of the gut-brain axis, the present study combines the metagenomic analysis of gut microbiota with fMRI results to investigate whether gender differences in gut microbiota are present in PD patients and whether they correlate with brain activity. Taken together, this study will help to better investigate the role of gut microbiota in the pathogenesis and pathological changes of PD.
Participants
We recruited 24 age-, weight-, and Hoehn-Yahr (HY) score-matched patients (including 13 males and 11 females) with PD from the Department of Neurology of the First People's Hospital of Huai'an City. PD was diagnosed according to the primary Parkinson's disease diagnostic criteria [MDS 2015 criteria (Postuma et al., 2015)], and patients continued stable-dose PD treatment for the duration of the study. The age range of the PD patients was 52-78 years. Participant exclusion criteria included: Parkinson's syndrome; chronic gastrointestinal disease; autoimmune disease with gastrointestinal involvement; malignancy; and a history of antibiotic use in the last month. We obtained written informed consent from each patient before sample collection. All patients were assessed with the body mass index (BMI), UPDRS III (Asakawa et al., 2016), HAMA (Hamilton, 1959), and HAMD (Hamilton, 1967) scales before sample collection. At the same time, we also calculated the L-dopa equivalent dose of each PD patient. The study protocol was approved by the ethics committee of the Affiliated Huai'an No.1 People's Hospital of Nanjing Medical University.
Fecal sample collection
Fresh stool samples were collected from all PD patients early in the morning before medication was administered. One clinical fecal sample per patient was collected. All samples were stored at −80 °C until DNA extraction.
Metagenomic sequencing of the gut microbiota
DNA from the 24 fecal samples was extracted using the E.Z.N.A.® Stool DNA Kit (D4015-02, Omega, Inc., USA) according to the manufacturer's instructions. The total DNA was eluted in 50 µl of elution buffer by a modification of the procedure described by the manufacturer (QIAGEN) and stored at −80 °C until measurement by PCR at LC-BIO TECHNOLOGIES (HANGZHOU) CO., LTD., Hangzhou, Zhejiang Province, China. The quality of the DNA extraction was determined by agarose gel electrophoresis, and DNA was quantified with a UV spectrophotometer. The DNA library was constructed with the TruSeq Nano DNA LT Library Preparation Kit. DNA was fragmented with dsDNA Fragmentase (NEB, M0348S) by incubating at 37 °C for 30 min. After the libraries passed quality control, high-throughput sequencing was performed on a NovaSeq 6000 in PE150 mode. The raw data obtained from sequencing were subjected to further analysis. First, sequencing adapters were removed from the sequencing reads using cutadapt v1.9. Second, low-quality reads were trimmed with fqtrim v0.94 using a sliding-window algorithm. Third, reads were aligned to the host genome using bowtie2 v2.2.0 to remove host contamination. Once quality-filtered reads were obtained, they were de novo assembled to construct the metagenome for each sample with IDBA-UD v1.1.1. All coding regions (CDS) of the metagenomic contigs were predicted with MetaGeneMark v3.26. CDS sequences of all samples were clustered with CD-HIT v4.6.1 to obtain unigenes. Unigene abundance for a given sample was estimated by TPM based on the number of aligned reads from bowtie2 v2.2.0. Unigenes were retained after filtering out those with low abundance; unigenes were compared against the NR database with DIAMOND software to obtain species annotation information, and against the GO and KEGG databases for functional annotation. Alpha diversity and beta diversity were determined with QIIME2, and the figures were drawn with R (v3.5.2). Linear discriminant analysis (LDA) effect size (LEfse) analyses were performed with the LEfse tool. 1
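To make the abundance estimation step concrete, the short sketch below reimplements the TPM normalization mentioned above with plain numpy; the read counts and unigene lengths are made-up illustrative values, and the upstream trimming, alignment and assembly steps are not reproduced.

import numpy as np

def tpm(read_counts, gene_lengths_bp):
    # length-normalize the counts (reads per kilobase), then rescale to one million
    counts = np.asarray(read_counts, dtype=float)
    lengths_kb = np.asarray(gene_lengths_bp, dtype=float) / 1000.0
    rate = counts / lengths_kb
    return rate / rate.sum() * 1_000_000.0

counts = [120, 30, 0, 860]          # reads aligned to four unigenes in one sample
lengths = [1500, 600, 900, 2400]    # unigene lengths in base pairs
print(tpm(counts, lengths).round(1))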
MRI data acquisition
Magnetic resonance images were acquired on a 32-channel 3.0-T MRI scanner (Philips, Ingenia 3.0 CX). The T1-weighted three-dimensional images were obtained with the following parameter settings: repetition time (TR) = 6.6 ms, echo time (TE) = 3.0 ms, thickness = 1.0 mm, flip angle = 8°, field of view (FOV) = 240 mm × 240 mm, matrix = 240 × 240. Functional MR images were acquired across 250 scans with a gradient echo EPI sequence: TR = 2,000 ms, TE = 30 ms, and flip angle = 90°. A total of 33 slices (FOV = 230 mm × 230 mm, matrix = 96 × 94, slice thickness = 3.6 mm, and 250 volumes) aligned along the anterior cingulate and posterior cingulate cortex line were acquired. During the rs-fMRI scans, all participants kept their eyes closed and remained relaxed, motionless and awake, thinking of nothing in particular. A routine brain fluid-attenuated inversion recovery (FLAIR) sequence was acquired to exclude other cerebral abnormalities.
Image pre-processing
The resting-state fMRI image preprocessing was performed with the DPARSF software. 2 The first 10 time points were removed to eliminate interference from early signal instability. The remaining 240 time points were analyzed as described below: slice-timing correction; realignment to check head motion (participants with head motion > 1.5 mm translation or rotation > 1.5° would be excluded); and co-registration of the functional MRI images to the participants' 3D-T1 images. The T1 structural images were then segmented into gray matter, white matter and cerebrospinal fluid and spatially normalized to Montreal Neurological Institute (MNI) space, and the final normalized functional images were smoothed with a 6 mm full-width at half-maximum (FWHM) Gaussian kernel. Following the filtering of the imaging data, linear regression was used to remove spurious variance from head motion, the CSF signal, and the white matter signal.
The amplitude of low-frequency fluctuations (ALFF) analysis
First, the acquired images were linearly detrended. After band-pass filtering at 0.01-0.08 Hz, the data were converted into a power spectrum by fast Fourier transform, and the square root of the power spectrum was then calculated to extract the ALFF value. The ALFF value of each voxel was divided by the global mean ALFF (mALFF) value to obtain a standardized ALFF value. An independent two-sample t-test was used to calculate the ALFF differences between the two groups with the head motion parameters as covariates. The results were corrected with Gaussian random field theory correction (GRF) (voxel p-value < 0.001, cluster p-value < 0.05). The peak voxel MNI coordinate of the significant voxels was picked as the ROI. The REST software tool 3 was used to extract the ALFF value of the peak voxel for subsequent statistical analysis.
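As a hedged illustration of this computation for a single voxel time series, the numpy sketch below detrends the signal, takes the square root of its FFT power spectrum, and averages the amplitude over the 0.01-0.08 Hz band; the TR of 2 s and the 240 retained volumes follow the acquisition described above, while the signal itself is synthetic and the mALFF standardization across voxels is only indicated in a comment.

import numpy as np

def alff(timeseries, tr=2.0, low=0.01, high=0.08):
    x = np.asarray(timeseries, dtype=float)
    idx = np.arange(x.size)
    x = x - np.polyval(np.polyfit(idx, x, 1), idx)              # linear detrend
    freqs = np.fft.rfftfreq(x.size, d=tr)
    amplitude = np.sqrt(np.abs(np.fft.rfft(x)) ** 2) / x.size   # square root of the power spectrum
    band = (freqs >= low) & (freqs <= high)
    return amplitude[band].mean()

rng = np.random.default_rng(0)
t = np.arange(240) * 2.0                                        # 240 volumes, TR = 2 s
voxel = rng.normal(size=240) + np.sin(2 * np.pi * 0.05 * t)     # synthetic 0.05 Hz component
raw = alff(voxel)
# In practice each voxel's ALFF is divided by the global mean ALFF (mALFF)
# to give the standardized value that enters the two-sample t-test.
print(raw)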
Statistical analysis
The clinical characteristics of the participants were analyzed using the SPSS 26.0 statistical package (IBM SPSS Statistics). Categorical data were compared using the chi-square test, and quantitative data were compared using independent-samples t-tests (two-tailed). p < 0.05 was considered statistically significant. Results are shown as mean ± standard error. Correlation analysis was performed using Spearman correlation analysis in SPSS.
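A minimal sketch of these between-group tests with open-source tools (scipy.stats rather than SPSS 26.0) is shown below; all numbers are invented examples rather than study data.

import numpy as np
from scipy import stats

male_bmi   = np.array([22.1, 23.4, 21.8, 24.0, 22.9])
female_bmi = np.array([25.2, 26.1, 24.8, 27.0, 25.5])

t_stat, p_t = stats.ttest_ind(male_bmi, female_bmi)                    # quantitative variable
chi2, p_chi, dof, expected = stats.chi2_contingency([[8, 5], [5, 6]])  # categorical 2 x 2 table

# Spearman correlation, e.g. between a genus' relative abundance and regional ALFF values
abundance = np.array([0.01, 0.05, 0.02, 0.08, 0.03])
alff_vals = np.array([0.42, 0.55, 0.47, 0.61, 0.50])
rho, p_rho = stats.spearmanr(abundance, alff_vals)
print(p_t, p_chi, rho, p_rho)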
Clinical characteristics of PD patients
A total of 24 patients with PD were enrolled in this study and divided into two groups, comprising 13 males (PD_M) and 11 females (PD_F). There were no statistically significant differences in age, disease duration, weight, Hoehn-Yahr stage, HAMA, HAMD, or levodopa equivalent dose between the two groups (p > 0.05). However, female patients had a higher BMI than males (p = 0.001) (Table 1).
Sequencing data and gut microbiota diversity
Metagenomic sequencing was performed on the 24 fecal samples collected from PD patients. The Venn diagram illustrates the number of unigenes shared between males and females or unique to one gender (Figure 1A). Specifically, 201,931 unigenes were unique to the PD_M group, and 111,596 unigenes were unique to the PD_F group. The number of unigenes shared by the two groups was 126,934. Subsequently, the distribution and composition of the microbial communities in the samples were obtained by species annotation based on the sequence information from the unigenes. To assess differences in microbiota composition between the two groups, the Chao1 (p = 0.28), Shannon (p = 0.78), and Simpson (p = 0.82) indices were used to evaluate the α diversity of the gut microbiota (Figures 1B-D), and PCoA and NMDS were used to evaluate the β diversity (Figures 1E, F). However, neither the α nor the β diversity differed significantly between the two groups (p > 0.05).
Alterations of gut microbiota composition
At the phylum level, the gut microbiota in both male and female PD patients was dominated by Bacteroidetes, Firmicutes, and Proteobacteria (Figure 2A). However, the abundance of all phyla did not differ significantly between the two groups. The top three dominant bacteria at the genus level were Bacteroides, Prevotella, and Faecalibacterium (Figure 2B). Next, nine significantly different genera between the two groups were selected by abundance variation analysis (p < 0.05 and |log2(fold_change)| > 1). This analysis demonstrated that the relative abundance of Propionivibrio (p = 0.016), Thermosediminibacter (p = 0.018), Flavobacteriaceae_noname (p = 0.033), Dethiosulfatibacter (p = 0.036), Alsobacter (p = 0.037), Candidatus_soleaferrea (p = 0.038), Halocella (p = 0.045), and Leminorella (p = 0.049) was increased in the male and decreased in the female PD patients. Conversely, the relative abundance of Propionicicella (p = 0.031) was decreased in males and increased in females (Figure 2C). To further identify the significant differences between the two groups, LEfSe analysis was performed across the seven taxonomic strata for the different comparison groups (LDA > 3.0 and p < 0.05), and species with significant differences are presented as evolutionary branching plots (Figure 2D) and distribution histograms (Figure 2E). The results demonstrated that the relative abundance of Verrucomicrobial, Akkermansiaceae, and Akkermansia was increased in the male PD patients. Conversely, the relative abundance of Escherichia, Escherichia_coli, and Lachnospiraceae was significantly higher in the female PD patients.
Functional characterization of differentially expressed genes (DEGs)
The unigene differential expression analysis showed that 4,141 unigenes were upregulated and 3,187 unigenes were downregulated between the two groups (Figure 3A). Next, GO enrichment analysis of the DEGs was performed (Figure 3B), and significant DEGs were classified into three main categories: biological process (BP), cellular component (CC), and molecular function (MF). The top three significantly upregulated terms in the BP category were related to the peptide metabolic process, protein processing, and the carbohydrate metabolic process. In the CC category, the upregulated genes were significantly associated with the extracellular space and integral components of the plasma membrane. For the MF category, the upregulated DEGs were correlated with hydrolase activity (acting on glycosyl bonds), serine-type carboxypeptidase activity, and metallocarboxypeptidase activity. KEGG pathway enrichment analysis showed significant enrichment in the categories of Cellular Processes, Environmental Information Processing, Genetic Information Processing, Human Diseases, and Metabolism (Figure 3C). The most significantly enriched pathway was the Metabolic pathways entry in the Metabolism category. In addition, the upregulation of sesquiterpenoid and triterpenoid biosynthesis in male PD patients was statistically significant (p < 0.05) (Figure 3D).
Relationship between the ALFF values of peak brain region and microbial abundance
Brain ALFF analysis was performed using the fMRI data obtained from the 13 male and 11 female PD patients. Compared to the male group, females exhibited decreased ALFF signals in the left angular gyrus of the inferior parietal region (Parietal_Inf_L), the left superior parietal gyrus, and the left postcentral gyrus (Figure 4A). Parietal_Inf_L was selected as the peak brain region, and regional ALFF values were extracted for each subject. Spearman correlation analysis between the extracted ALFF values and the nine significantly different genera (Figure 4B) demonstrated that the abundance of Propionivibrio was positively correlated with the ALFF values (r = 0.45, p = 0.027).
Discussion
The microbiota-gut-brain axis is considered a complex bidirectional signaling pathway between the gut and the central nervous system, involving the autonomic nervous system, the immune system, chemotransmitters, and endocrine hormones (Margolis et al., 2021). Recent evidence suggested that the interaction between sex specificity and gut microbiota may play a role in the development of neurodegenerative diseases by influencing the gut-brain axis (Rizzetto et al., 2018; Jaggar et al., 2020). The pathology of intestinal origin of PD was first proposed by Del Tredici and Braak (2008) and was subsequently confirmed by a growing number of human and experimental studies (Lionnet et al., 2018). Gut microbiota can affect the onset and development of PD through multiple mechanisms, such as neural pathways, alteration of host metabolism, modulation of peripheral immunity, and influence on drug efficacy (Zhu et al., 2022). While sex hormones, mainly estrogen, were thought to support neuronal antioxidant maintenance of the dopaminergic nervous system (Zarate et al., 2017), thus far, no studies have explored sex-specific effects on the onset and progression of PD via the microbiota-gut-brain axis. Based on these premises, the current study further investigated the sex specificity of the gut microbiota and its correlation with brain function in PD patients.
The results of our study showed no significant difference in gut microbiota diversity between male and female PD patients. Previously, a European study reported a higher α-diversity of gut microbiota in females than in males (Mueller et al., 2006). However, findings in Asia differ from those in the West, with no significant differences in α-diversity between males and females documented in a Japanese study (Takagi et al., 2019). Additionally, it has been suggested that the association between sex and α-diversity is stronger in young adults than in middle-aged adults, and that the most significant changes in gut microbiota diversity occur in early childhood (Yatsunenko et al., 2012). Given that no differences in α-diversity were found between middle-aged people and those in their 70s (Biagi et al., 2010), we hypothesized that the comparable gut microbiota diversity between male and female PD patients might be related to the average age of the enrolled patients, around 60-70 years. Importantly, our study identified significant differences in the relative abundance of gut microbiota at the genus level between male and female PD patients. We determined that the relative abundance of Propionivibrio, Thermosediminibacter, Flavobacteriaceae_noname, Dethiosulfatibacter, Alsobacter, Candidatus_Soleaferrea, Halocella, and Leminorella was higher in the male than in the female PD patients. Moreover, the LEfse analysis showed that Verrucomicrobial, Akkermansiaceae, and Akkermansia were dominant in the males. Several studies have shown a significant increase in the levels of Verrucomicrobial, Akkermansiaceae, and Akkermansia in PD patients compared to healthy individuals (Bullich et al., 2019; Shen et al., 2021). A study by Lin and coworkers documented an increased abundance of Verrucomicrobia in PD patients that correlated with disease severity and with increased plasma IFN-γ concentrations (Lin et al., 2019). Akkermansiaceae is a family within the Verrucomicrobia, and Akkermansia is the only well-known genus within the family. Akkermansia was isolated from the outer mucus layer attached to intestinal epithelial cells and can be involved in the degradation of mucin (Geerlings et al., 2018). An increased abundance of Akkermansia in the stool of PD patients has also been detected in many studies (Keshavarzian et al., 2015; Heintz-Buschart et al., 2018; Lin et al., 2019). It has been suggested that Akkermansia may be involved in the pro-inflammatory process of PD, causing intestinal barrier disruption and abnormal aggregation of α-synuclein in the enteric nervous system (ENS), accelerating the progression of PD (Fujio-Vejar et al., 2017). The present investigation demonstrated that the relative abundance of Propionicicella was decreased in female patients, while Escherichia, Escherichia_coli, and Lachnospiraceae were predominant. An increased abundance of Enterobacteriaceae was considered important in the intestinal alterations associated with PD (Scheperjans et al., 2015). Conversely, Lachnospiraceae is a potentially beneficial family of gut microbiota found in most healthy people (Sorbara et al., 2020). Members of Lachnospiraceae produce short-chain fatty acids that play a role in anti-inflammatory reactions and coordinate gastrointestinal nervous system function (Lin et al., 2018; Oliphant et al., 2021). Keshavarzian et al. (2015) showed that the abundance of Lachnospiraceae decreased with the duration of PD.
Fig. 4 Relationship between the peak value of the ROI analysis and microbial abundance. (A) Differential brain functional areas in PD patients by gender. (B) Spearman correlation analysis.
Interestingly, estrogen has recently been reported to affect the gut microbiota. In a cross-sectional study, urinary estrogen levels correlated with fecal microbiota abundance and α-diversity in men and postmenopausal women, and β-glucuronidase production by certain fecal flora was negatively associated with the level of fecal estrogen (Flores et al., 2012). In our study, the baseline clinical characteristics indicated that female patients had a higher BMI than male patients. A Chinese study of subjects with different BMI values showed that higher α-diversity was present in underweight patients, but no significant diversity differences were observed among obese, normal weight, and overweight patients (Gao et al., 2018). Obesity in Chinese individuals was characterized by increased Fusobacterium in men, whereas obese women exhibited an increased abundance of the Bifidobacterium, Coprococcus, and Dialister genera. Although some of the female patients in our study were overweight, the collected data do not prove that changes in the gut microbiota of PD patients of different genders are affected by BMI values; a larger sample size is needed to explore the relationship between BMI and intestinal flora in female PD patients.
The metagenomic sequencing results also revealed that differentially expressed unigenes were significantly enriched in protein processing, serine-type carboxypeptidase activity, and the extracellular space. This information provides direction for further mechanistic investigations. PD is considered a classical "protein disease" in which proteins misfold to form fibrillar aggregates rich in β-sheet structure. Additionally, the results of the KEGG enrichment analysis suggested that metabolic pathways may play an important mechanistic role in gender-specific differences in the gut microbiota of PD patients, but metabolomics-related studies were not performed here. Such studies, utilizing larger sample sizes, are needed to elucidate the mechanisms underlying gender differences in the gut microbiota of PD patients. In addition, we found that the expression of the sesquiterpenoid and triterpenoid biosynthesis pathways was increased in male PD patients and was statistically different from that in female patients. In this regard, the triterpenoids found in Centella asiatica have been shown to have a neuroprotective function (Gray et al., 2018). For example, triterpene constituents and glycosides from Centella asiatica have been shown to have neuroprotective effects in stroke models, reducing cytotoxic damage and microglia activation (Chen et al., 2014). Since it is well known that estrogen plays a protective role in female patients with PD, the possibility that the gender-specific gut microbiota plays a protective role in the development of PD warrants additional mechanistic studies.
Gender differences are also present in the human brain. Males have a larger total brain volume, cortical surface area and sulcal gyrification, a greater white-to-gray matter ratio, and less dense gray matter than females; these differences may be genetically determined. It has been found that hippocampal plasticity is subject to sex-related alterations in the microbiome (Darch et al., 2021). Experiments in germ-free mice revealed that alterations in the microbiota might indeed lead to changes in dendritic signaling integration in hippocampal circuit regions (Luczynski et al., 2016). All these studies imply that sex differences in the gut-brain axis may be modified by the microbiome. The work by Siani et al. (2017) demonstrated that ovariectomy increases dopaminergic loss in female mice, while supplementation with estrogen prevents dopaminergic loss. This result suggested that sex hormone therapy may bring potential benefit to PD patients through the gut-brain axis. By ALFF analysis, we identified Parietal_Inf_L as a region of difference in brain function between male and female PD patients. As previously shown, this brain region is associated with AD-related Aβ alterations and is thought to be one of the potential seeding regions of toxic Aβ oligomers in the brain (Jiang et al., 2015). In addition, we found that the relative abundance of Propionivibrio was positively correlated with the peak ALFF value in Parietal_Inf_L. Therefore, we hypothesized that sex-specific changes in intestinal flora abundance might affect brain function in PD patients through the action of the gut-brain axis. Although we have revealed the effects of sex differences on the brain and gut microbiota in PD patients, the mechanisms of these interactions and their mutual relationship remain unclear. More experiments are needed to elucidate the role of sex differences and their interactions with the gut microbiota in the onset and progression of PD.
Several strengths and limitations of this study need to be acknowledged. Its main advantage is the first-ever exploration of the changes in gut microbiota in PD patients that takes into account patients' gender and combines it with metagenomic sequencing technology. In addition, for the first time, the fMRI technique was utilized to advance the understanding of the relationship between microbiota changes and brain function. However, our study has some weaknesses: (i) It was designed as a pilot study with a small sample size. (ii) Healthy controls were not included, although previous studies have shown differences in gut microbiota between PD patients and normal subjects. (iii) The evaluation of the relationship between altered gut microbiota and brain function was limited to association analysis, and more basic research is needed to identify the underlying mechanisms.
Conclusion
The present study identified the differences in the composition and function of the gut microbiota between male and female PD patients by metagenomic sequencing. fMRI results confirmed that gender differences exist in specific brain regions in PD patients and may be associated with altered gut microbiota. In conclusion, we provide a novel direction to investigate the mechanism of action of the microbiota-gut-brain axis in male and female PD patients.
Data availability statement
The sequence data presented in this study are deposited in the NCBI Sequence Read Archive (SRA) database, accession number PRJNA985875.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of the Affiliated Huai'an No.1 People's Hospital with Nanjing Medical University. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. | 2023-07-14T15:04:34.821Z | 2023-07-12T00:00:00.000 | {
"year": 2023,
"sha1": "8829315852d7565c866ebcdd9d1ca4917f5f816e",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnagi.2023.1148546/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f764e4264f4088bf1d081b085e2c26bb00b616e9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251457784 | pes2o/s2orc | v3-fos-license | Fluorine in metal-catalyzed asymmetric transformations: the lightest halogen causing a massive effect
This review aims at providing an overview of the most significant applications of fluorine-containing ligands reported in the literature starting from 2001 until mid-2021. The ligands are classified according to the nature of the donor atoms involved. This review highlights both metal–ligand interactions and the structure–reactivity relationships resulting from the presence of the fluorine atom or fluorine-containing substituents on chiral catalysts.
Introduction
Fluoroorganic molecules have received widespread interest in recent research in view of the fact that their synthesis is virtually missing from any biological process. Being the 13th most abundant element in the lithosphere, fluorine is mostly present in water-insoluble minerals, i.e., fluorspar (CaF2), cryolite (Na3AlF6), and fluoroapatite (Ca5(PO4)3F), which limits its uptake into living organisms. 1 Also, the nucleophilicity of fluoride is diminished by its high hydration energy, and therefore, this anion is inadequate for any nucleophilic substitution reactions in aqueous media. As a result, only a dozen naturally occurring fluorometabolites, i.e., fluoroacetic acid, ω-fluorinated fatty acids, (2R,3R)-2-fluorocitric acid, (2S,3S,4R,5R)-nucleocidin, and (2S,3S)-4-fluorothreonine, have been identified so far. 2 Since then, such scarcity has been compensated for by man-made fluorine-containing pharmaceuticals, of which many have been welcomed as blockbuster drugs on the market. 3 Molecular conformation, membrane permeability, and metabolic stability are all properties affected by fluorine substitution, but the impact on the pharmacodynamics and pharmacokinetics of a lead remains quite unpredictable. 4
Samuel Lauzon was born in Québec, Canada. He joined the research group of Professor Thierry Ollevier as an undergraduate intern student in 2015. He obtained his MSc in 2018 working on asymmetric iron catalysis. He received his PhD in organic chemistry at Université Laval (Québec, Canada) under the supervision of Professor Thierry Ollevier in 2022. His work involved the development of designer 2,2′-bipyridinediol ligands for applications in metal-catalyzed asymmetric transformations. He is currently interested in ligand design, crystallization of chiral catalysts, transition state models, and environmentally benign catalysis.
Thierry Ollevier was born in Brussels and obtained his Licence (1991) and PhD (1997) at the University of Namur (Belgium) and was a post doctorate fellow at the Université catholique de Louvain (Belgium), under István E. Markó (1997), a NATO and BAEF postdoctorate fellow at Stanford University under Barry M. Trost (1998Trost ( -2000, and then a post doctorate fellow at the Université de Montréal under André B. Charette (2000)(2001). Aer an Assistant Professor appointment (2001) at Université Laval (Québec, Canada), he became Associate (2006) and is currently Full Professor. Current research in his group aims at designing novel catalysts, developing catalytic reactions and applying these methods to chemical synthesis. He is active in the areas of asymmetric catalysis, iron catalysis, diazo chemistry, uorine chemistry, and ow chemistry. He has served as a member of the Advisory Board of SynOpen since 2019, as an Associate Editor of RSC Advances since 2015 and was admitted as a Fellow of the Royal Society of Chemistry (2016). specic behaviour of uorinated compounds arises from the short, strong, and highly polarized C-F bond that electrostatically pairs with the neighbouring atoms, bonds, and lone pairs. 5 Instead of being targeted only for biological activity purposes, various chiral organouorine compounds were employed as catalysts in asymmetric transformations. 6 Among the known strategies for designing enhanced stereodiscriminating catalysts, 7 the conversion of known catalytic systems into F-containing ones unlocked new interesting features ensuing from the uorine effect. Chiral uorinated catalysts benet from (i) electronically and (ii) sterically customized properties and (iii) being used in uorous biphasic systems (FBS). Since an increased reactivity in acid catalysts is oen a synonym of electronic deciency, Pauling (c P ), empirical (c e ), Huheey (c H ), and Jaffé (c J ) electronegativities values are provided for a selection of atoms and groups (Table 1). 8 Fluorine is the most electronegative element on the Pauling scale (c P ¼ 3.98) and is most frequently used as an electronically impoverishing substituent in ligands. The CF 3 group is also electronically decient, much more than other C-based substituents, and its c value is similar to the ones of the CN and NO 2 groups, known as strongly electron withdrawing groups (EWG). A set of steric parameters, e.g., A values (ÀDG), 9 Ta (ÀE s ), 10 Charton (n), 11 and biphenyl rotational interference values (I X-H ), 12 and Boltzmann-weighted Sterimol parameters (wB 1 , wL, and wB 5 ), 13 eases the comparison of bulkiness between selected atoms and functional groups ( Table 2). Fluorine is a small element (van der Waals radius ¼ 1.47 A vs. 1.20 A for H vs. 1.52 A for O) 14 and generally causes minimal steric perturbation upon H to F exchange, whereas the substitution of C-H by C-F in a methyl group successively leads to an increase in size (Me < CH 2 F < CHF 2 < CF 3 ). Thus, uorine is a powerful tool to increase the steric hindranceeven more in polyuoroalkyl groupsrendering the CF 3 group as a bulky substituent in the general order Me < i Pr $ CF 3 ( Ph $ t Bu. The CF 3 group is a key component for ne-tuning simultaneously the steric and electronic properties, which is undeniably of great interest for ligand design. Following the "like dissolves like" principle, a chiral ligand bearing sufficiently long per-uoroalkyl chains, i.e., ponytails (R F ), acquires a strong affinity for the uorous phase, also called "uorophilicity". 
This engineered technology involves the temperature-dependent miscibility of fluorous-organic solvent mixtures, leading to partition into two phases at lower temperature. The fluorous ligand (or catalyst) can be selectively separated from the reactant/product mixture and recovered using various experimental methods.
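For context, fluorophilicity is commonly quantified in the fluorous-chemistry literature through a partition coefficient; the convention below is a widely used one rather than a definition taken from the references cited above, and the solvent pair is given only as a typical example:

f = ln P = ln (c_fluorous phase / c_organic phase)

where P is measured, for instance, for partitioning between perfluoro(methylcyclohexane) and toluene. Sufficiently long CnF2n+1 ponytails drive P well above unity, which is what makes the temperature-dependent phase separation described above practical for catalyst recovery.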
The synergistic participation of the steric, electronic, and physical properties brought up by the chiral fluorinated ligand may modulate the stereoselective event of a reaction positively vs. "fluorine-free transition states (TS)". This review highlights both the metal-ligand interactions and the structure-reactivity relationships related to the presence of the fluorine atom or fluorine-containing substituents on chiral catalysts. The article focuses on the molecular architecture of the ligands rather than on the type of reaction they were employed in. The selection of chiral fluoroorganic ligands is classified according to O-, N,O-, N-, P,N-, P-, and C-binding modes with metals. The advantages of using catalytic systems involving fluorine for both the reactivity and the stereochemical outcomes are highlighted and compared.
2.1 O-Based binding modes
Chiral phosphoric acids (CPAs) (Ra)-1-3 and (Sa)-4 were obtained by the phosphorylation of fluorinated 1,1′-bi-2-naphthol (BINOL)-type ligands and were described as promising candidates for enantioselective applications (Fig. 1). As an example, the fluorination reaction of β-ketoesters 5-8 using the 3 : 1 (Ra)-1/ScCl3 complex afforded the corresponding α-fluorinated products 9-12 with 78-84% ee (Scheme 1).16 The importance of the eight fluorine atoms was demonstrated based on the low 11% ee obtained for (+)-9 using the ScIII salt of the non-fluorinated CPA. On the other hand, (Ra)-2 (ref. 17) and (Ra)-3 (ref. 18) were respectively employed with CuI salts in the synthesis of chiral polyfluoro N-containing compounds, albeit with modest ees.
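For reference, the enantiomeric excess (ee) values quoted here and throughout this review follow the standard definition (a textbook relation rather than one specific to the cited works):

ee = ([major enantiomer] − [minor enantiomer]) / ([major enantiomer] + [minor enantiomer]) × 100%

so that, for example, 84% ee corresponds to a 92 : 8 ratio of enantiomers.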
In another application, cycloadducts (S,R,R)-15 and endo-(R,S,S,S)-16, respectively obtained from the hetero-Diels-Alder (HDA) and the Diels-Alder (DA) reactions between dienophile 13 and cyclopentadiene 14, were synthesized with excellent yields and stereoselectivities using the binary (Sa)-4/InIII catalytic system (Scheme 2).19 In most cases, a higher chemoselectivity was noted for the HDA cycloadduct vs. the DA one. The stereoselective induction was explained by the important π,π interaction, arising from an electrostatic pairing between one electronically deficient pentafluorophenyl ring and diene 14, together with highly sensitive ortho positions occupied by F atoms.
2.1.2 Carboxylates and alkoxides. Known as valuable catalysts in the chemistry of diazo compounds, chiral RhII tetrakis(carboxylates) are rather expensive to be easily used in industrial applications. The recyclable chiral fluorous RhII complex (S)-18 was then targeted in the cyclopropanation reaction of diazoester 19 and styrene 20 (Scheme 3).20 Whereas it was less selective in terms of ee than (S)-17, the perfluoroalkyl chain allowed an efficient recovery of the catalyst using both liquid and solid fluorous phase extraction strategies. Slight improvements of the chiral induction were obtained in the insertion reaction of 19 into the C-H bond of cyclohexane using (S)-18 under either homogeneous or heterogeneous conditions. Various enantioselective organic transformations were performed using more sophisticated CF3-containing BINOL-based chiral catalysts (Fig. 2). Indeed, synergetic activation using self-assembled bifunctional chiral catalysis was shown to be an efficient strategy in the hydrophosphonylation reaction of aldehydes.21 The multiple reactive sites of catalyst (Ra)-22, generated in situ in the presence of Ti(OiPr)4, the CF3-aryl-substituted BINOL, and (−)-cinchonidine, allowed cooperative interactions between steric and electronic properties, as postulated in the transition state model of the reaction. Also, trifluoromethylated BINOLates showed excellent chiral induction in the desymmetrization of Cs-symmetric phosphaferrocenes via ring-closing metathesis using MoVI catalysts (Ra)-23 and (Ra)-24.22
2.1.3 Diols. Since hydrobenzoin 25 is cheap and readily available in its R,R and S,S enantiomeric forms, chiral TiIV complexes have employed the 1,2-diphenylethane-1,2-diol scaffold in the enantioselective oxidation reaction of 4-methylthioanisole 30 using cumyl hydrogen peroxide (CHP) as the oxidant (Scheme 4, conditions A).23 The incorporation of OMe and CF3 groups onto the chiral backbone of (R,R)-26 and (R,R)-27 led to electronically modified TiIV catalysts of lower capacity for chiral induction. As observed experimentally, a reversal of the asymmetric induction occurred using the 4-CF3-C6H4 group (26% ee for (R)-36) vs. the unsubstituted one (80% ee for (S)-36), resulting in optically active sulfoxides of opposite signs. As postulated, competing mechanisms, arising from divergent binding modes with the TiIV centre, led to a decreased purity of 31 as the S enantiomer and to a complete reversal of the sense of chiral induction in favour of (R)-31. The TiIV-catalyzed asymmetric oxidation of aromatic sulfides was also demonstrated using a BINOL ligand comprising fluorine atoms at the 5, 5′, 6, 6′, 7, 7′, 8, and 8′ positions (Scheme 4, conditions B).24 The oxidation of 30 by the (Ra)-28/TiIV system afforded (R)-31 with virtually no enantioselectivity. Again, a reversal of the chiral induction was observed when (Ra)-29 was employed, and sulfoxide 31 was obtained as the S enantiomer in 80% ee. In the case of F8BINOL (Ra)-29, the tenfold increase in acidity of the hydroxyl groups (pKa = 9.28 vs. 10.28 for (Ra)-28) and the more positive oxidation potential (E = 2.07 V vs. 1.47 V for (Ra)-28) are believed to result in a more configurationally stable chiral environment of the TiIV-based catalyst.
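As a brief numerical aside on the acidity comparison above (standard definitions, simply restating the quoted values): since pKa = −log10 Ka, the difference between the two BINOLs corresponds to

ΔpKa = 10.28 − 9.28 = 1.00, so Ka[(Ra)-29] / Ka[(Ra)-28] = 10^ΔpKa ≈ 10,

i.e., the roughly tenfold increase in acidity of the hydroxyl groups mentioned for F8BINOL (Ra)-29.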
Being recognized along with BINOL as "privileged ligands",25 α,α,α′,α′-tetraaryl-1,3-dioxolane-4,5-dimethanol (TADDOL) ligands have inspired the design of ligands 32-35, bearing α,α,α′-tri(perfluoroalkyl) and α,α,α′,α′-tetra(perfluorobutyl) chains (Fig. 3).26 Being successfully employed in the Ti(OiPr)4-catalyzed methylation of various aldehydes, ligands (R,S,S)-32-34 afforded similar levels of chiral induction (ca. 95% ee) in the benchmark reaction using benzaldehyde. Notably, shortening the fluorinated ponytails (C4F9 vs. C8F17) had no effect on the efficiency of the catalyst, but the recyclability of the ligand was reduced; hence the use of an expensive fluorous solvent was needed for extraction. Interestingly, only the tetrakis(perfluoroalkyl) analogue (R,R)-35 was obtained from (R,S,S)-34, whereas the incorporation of a fourth perfluoroalkyl chain failed for diols (R,S,S)-32 and (R,S,S)-33 due to higher steric congestion. However, the (R,R)-35/TiIV catalyst was noticed to be inactive in the tested catalytic application.
BINOLs (Ra)-36 and (Sa)-37 were subjected to comparison studies in the addition of diphenylzinc to (E)-cinnamaldehyde 38 (Scheme 5).27 As can be seen for (Sa)-37, the fluorine atoms increase the Lewis acidity of the chelated ZnII metal centre. The higher enantioselectivity observed for (R)-39 was then attributed to a better activation of the aldehyde by a more active catalyst, which favours the ligand-controlled pathway vs. the uncatalyzed background reaction.
Enantioselective applications in fluorous biphasic systems (FBS) were described for perfluorobutyl, -hexyl, and -octyl BINOL derivatives, i.e., (Ra)-40-42 (ref. 28), (Sa)-43-46 (ref. 28c and 29), and (Ra,S,S)-47 (ref. 30) (Fig. 4). As an example, the addition of diethylzinc to a solution containing aldehydes 48-52 (0.20 equiv. each, total = 1.0 equiv.) in α,α,α-trifluorotoluene was performed using (Ra)-40 (Scheme 6).28a All five aldehydes reacted simultaneously with the nucleophile under the optimal reaction conditions. Elution of the reaction mixture with acetonitrile, using fluorous reverse phase (FRP) silica gel chromatography, afforded alcohols (R)-53-57 with ees up to 84%. The recovery of (Ra)-40 from the fluorous phase was achieved using tetradecafluorohexane (FC-72) as the eluent. Also, a recycling protocol using a fluorous biphasic system based on (perfluoro)methyldecalin was suitable to recover (Sa)-46, which was reused in up to nine subsequent runs and afforded (S)-1-phenylpropan-1-ol without loss of chiral induction.28c The protonation of the SmIII enolate, obtained from 2-methoxy-2-phenylcyclohexanone, afforded the corresponding enantioenriched ketone with similar yields and ees using both (Ra,S,S)-47 and its non-fluorinated analogue.30 The main advantage of (Ra,S,S)-47 is its easy recyclability via a simple filtration. The enantioselective fluorescent recognition of trans-1,2-diaminocyclohexane was performed using a fluorous BINOL in an FC-72/MeOH biphasic system.31 The success encountered with the axially chiral BINOL-type ligands fostered the development of designer diol ligands, i.e., (Ra,R,R)-58 (ref. 32), (Ra,R,R)-59 (ref. 32a-c), and (Ra,R,R)-60 (ref. 32a, b and 33), bearing benzylic α-CnF2n+1 alcohols, whose acidities are closely similar to the ones of phenolic compounds. The TiIV-catalyzed addition of Et2Zn to benzaldehyde 48 was tackled to evaluate the increase of the steric bulk arising from the various perfluoroalkyl chains (Scheme 7).32a The highest enantiomeric excess of (S)-53 was obtained using (Ra,R,R)-60, whose C7F15 chains induced the maximum steric hindrance. Moreover, the (Ra,R,R)-60/TiIV catalytic system was profitably applied in fluorous catalysis, and the chiral ligand was recovered quantitatively over seven cycles using an optimized binary solvent system of FC-72/CH2Cl2 (2 : 1).33b Overall, the increase in acidity of hydroxyl groups is a direct effect of the incorporation of fluorine atoms onto chiral ligands. Similarly, electronically deficient O-binding sites allow better interactions with the metal involved, giving a more compact TS and resulting in an increased asymmetric induction or even in a complete reversal of it. Perfluoroalkyl chains have also provided chiral ligands with the ability to be recycled through various fluorous phase extraction strategies.
2.2 N,O-Based binding modes
2.2.1 Diols. A sterically hindered catalytic site was built from 2,2′-bipyridinediol (S,S)-61 to take advantage of the stereoelectronic properties of CF3 groups at the α,α′-positions of the OH moieties. A selection of aromatic, heteroaromatic, and aliphatic alcohols (R)-53, (R)-55, and 70-77 were obtained in good to excellent yields (up to 99%) and enantioselectivities (up to 95% ee) using a ZnII-mediated reaction (Scheme 8).34 As observed from X-ray diffraction (XRD) analysis, the hexacoordinated (R,R)-61/Zn(OTf)2 complex led to the hypothesis of coherent transition state models of distinct configurations (either exo-trans or endo-trans) giving the major and minor enantiomers, respectively. Interestingly, (S)-ibuprofen was employed for the resolution of the α-CF3 alcohols, and 2,2′-bipyridinediol 61 was synthesized in both enantiomeric forms with excellent stereoselectivities (97% de and >99% ee for R,R; >99.5% de and >99.5% ee for S,S).
Scheme 7 TiIV-Catalyzed ethylation reaction using α,α′-CnF2n+1 diol ligands.
Chiral Schiff base ligands (S)-78-83, containing an α-CF3 alcohol at a C-stereocentre, were used in the addition of
diethylzinc to benzaldehyde 48 (Scheme 9).35 Ligands (S)-79-81, substituted at the ortho position with a Me or a tBu group, failed to increase the chiral induction. Only para-substituted (S)-82 afforded (R)-53 with an ee similar to that obtained with (S)-78. A non-linear relationship with a minimum enantiomeric amplification, together with high resolution mass spectrometry (HRMS) analysis, suggested that the dimeric [(S)-78]2/ZnII complex is the active catalyst. The C-stereocentre of Schiff base ligands (S)-78-83 was constructed by the enantioselective reduction of the o-nitrophenyl α-CF3 ketone using the (R)-CBS oxazaborolidine reagent. Surprisingly, the non-fluorinated analogues of these Schiff base ligands have not been described in the literature so far.
2.2.2 β-Amino alcohols. Fluorinated β-amino alcohol ligands 84-95 have been used in various asymmetric transformations involving the alkylation of aldehydes (Scheme 10). In the Et2Zn alkylation reaction of benzaldehyde,36 only the organozinc catalyst prepared from (S)-88 led to a maximum enantioselectivity, notwithstanding the presence of Me or iPr substituents on the ligand at the α-position of the hydroxyl group, giving (S)-89 and (S)-90 with bulkier quaternary carbon stereocentres (Scheme 10, left). Correlation studies on the catalyst loading (2-50 mol%) demonstrated a strong dependence between the increased amount of (S)-88-90 and the ee observed for (R)-53, whereas no dependency was determined for the non-fluorinated analogues (S)-94 and (S)-95; the superior degree of aggregation of CF3-containing β-amino alcohol ligands, particularly for the (S)-88/ZnII catalyst, strongly participated in the mechanism to reach a higher chiral induction of alcohol (R)-53.36a A wider library of β-amino α-CF3 alcohol ligands was screened in the Reformatsky reaction of PhCHO (Scheme 10, right).37 The ees obtained with ligands (S)-84 and (S)-85, containing a primary or a secondary amine, were much lower than the ones provided by tertiary amine-based ligands. Furthermore, (S)-86, possessing an N,N-dimethylamino group, led to the highest ee of 81% for (R)-96 in comparison with the other ligands having diisopropylamine ((S)-87), piperidine ((S)-88), and carbazole ((S)-91) motifs at the β-position. As shown with (S,S)-92 and (S,S)-93, benzene rings bearing β-amino α-CF3 alcohols tethered at the 1,2 and 1,3 positions were considerably less efficient than (S)-86. The aggregation effect of such trifluoromethylated ligands with ZnII species was also found beneficial for the enantioselectivity ((S)-88 vs. (S)-94).
ZnII-Catalyzed alkylation of aldehydes was also performed using substituted N-methyl-L-prolinols bearing α,α-aryl groups substituted by a F atom38 or a C8F17 chain39 at the para position. Through AgI catalysis, the 1,3-dipolar cycloaddition reaction was disclosed using 2,3-dihydroimidazo[1,2-a]pyridine-based DHIPOH ligands, substituted by a CF3 group at the C6 position of the quinoline backbone, but these only showed modest chiral inductions among the tested ligands.40 The CuII-catalyzed aldol reaction of β,γ-unsaturated α-ketoesters with coumarin-3-ones was investigated using various prolinol derivatives (S)-97-102 (Scheme 11).41 Supported by density functional theory (DFT) calculations, the nucleophilic attack of the coumarin-3-one was hypothesized to occur from the Si face of the α-ketoester, predominantly affording the S,R diastereoisomer of 103 with good yields (67-93%) and low to excellent enantioselectivities (56-94% ee). According to the experimental results, the observed ees decreased when electron-withdrawing groups were present on the aromatic rings.
Brønsted base/Lewis acid cooperative catalysis was highlighted by the formation of dinuclear ZnII catalysts in the presence of bis(prolinol)phenols (S,S)-104-106 (Fig. 5).42 Unfortunately, lower stereoselectivities were obtained using (S,S)-104-106 vs. the non-fluorinated ligands in both the alkynylation43 and related addition reactions. Fluorinated salen-type ligands were mixed with MnIII,46 CoII,47 or IrII salts,48 and these salen/metal catalysts showed high levels of stereoinduction in various catalytic applications using FBS.
The efficiency of FeIII(salen) catalysts was compared in the asymmetric epoxidation reaction of 127, for which a higher level of chiral induction was mainly attributed to the fluorophilic effect (Scheme 13).49 According to crystallographic experiments, two distinct structures were observed for catalysts (R,R)-125 and (R,R)-126, bearing C4H9 and C4F9 chains, respectively. Catalyst (R,R)-126, because of the intramolecular stacking of its C4F9 chains, was described to adopt a unique umbrella conformation, which was more efficient in affording enantiomerically enriched (S)-128 than the usual C2-symmetrical stepped conformation of metal(salen) complexes.
Structural insights into the C2-symmetric complexes prepared from the combination of fluorous diamino-dialkoxy ligands with TiIV and ZrIV cations were obtained from various spectroscopic and crystallographic analyses. Chiral dirhodium catalyst P-cis-anti-139 was employed in the [2 + 1]-cycloaddition reaction of ethyl α-diazoacetate 135 and hept-1-yne 140 (Scheme 15).52 Other mono- and tris(amidate)-bridged RhII2 complexes were prepared from the N-triflylimidazolidinone precursor (R,R)-138, but all led to (S)-141 with lower enantioselectivities (<95% ee). The trans-vicinal CF3 groups on the imidazolidinone backbone brought interesting features to (R,R)-138, i.e., highly sterically hindered and electron-deficient chelating N atoms.
To sum up, ligands bearing a trifluoromethyl substituent for inducing chirality are scarce, but the potential of using the CF3 group to provide an important steric hindrance at the α position of alcohols and amines has been disclosed in various studies. Furthermore, the beneficial synergy of the steric and the electronic properties for optimal enantioselectivity was demonstrated in the comparison study performed using CF3-containing ligand 88 and its Me- and iPr-analogues in the ethylation reaction of benzaldehyde. The fluorophilic effect is also responsible for significant conformational changes to classical salen complexes, leading to a new conformation adopted by the F-tagged ligand-metal complex. A related chiral ligand, bearing dichlorophenyl arms, was used (Scheme 16).54 Interestingly, fluorine substitution at the 2,6- and 2,3,4,5,6-positions of the aryl group led to lower enantioselectivities of (1R,2R,4R)-149, but higher endo/exo diastereoselectivities were observed (142 < 145 < 146).
A 3,5-bis(trifluoromethyl)phenyl ring was incorporated into a Cinchona-alkaloid-based sulfonamide ligand, which was employed as the chiral source in the CuII-catalyzed radical oxytrifluoromethylation of alkenyl oximes.71 The Simmons-Smith cyclopropanation of allylic alcohols was performed using ZnII complexes generated in situ from fluorous disulfonamide ligands, where both ligands were easily recovered by fluorous solid phase extraction. The allenylation of terminal alkynes with diazo compounds, generated in situ by MnO2-assisted oxidation of the corresponding hydrazones in a continuous flow system, was disclosed as being highly enantioselective (89-97% ee) under CuI catalysis (Scheme 24).74 A library of pyridine bis(imidazoline) (PyBIM) ligands bearing N-aryl substituents, i.e., 4-CF3-C6H4, 3,5-(CF3)2-C6H3, and 4-CF3-3,5-F2-C6H2, was screened, but only (S,S)-210, comprising 4-SF5-C6H4 groups, showed an optimal chiral induction. Noteworthily, these chiral ligands were synthesized by treatment of a pyridine-2,6-diimidoyl chloride precursor, derived from (S)-tert-leucinol, with the corresponding fluorinated aniline, such as 4-(pentafluorothio)aniline in the case of (S,S)-210. As postulated, the enantioenriched allene (R)-213 was formed via the concerted Cu-C bond insertion of the (S,S)-210/CuI acetylide intermediate into the diazo compound 211, which approaches with its H near the tBu group of the ligand to induce minimal steric strain. The scope was further extended to propargylamides derived from (S)-ibuprofen, penicillin G, (R,R)-atorvastatin, and others, and excellent stereoselectivities were highlighted with the disclosed method.
Scheme 24 CuI-Catalyzed allenylation reaction and the postulated TS.
Scheme 25 MgII-Catalyzed dynamic kinetic asymmetric [3 + 2] cycloaddition reaction.
Various substrates were subsequently employed to evaluate the scope of the reaction. An opposite trend was observed in the CuI-catalyzed allenylation reaction presented above, where the optimization studies performed on model substrates, using p-R-iPr-PyBOX ligands instead of tBu-PyBOX, led to the corresponding allene with 47% ee (R = CF3), 68% ee (R = H), and 70% ee (R = OMe).74 Chiral pyridine(oxazoline) (PyOX) ligands bearing a CF3-substituted C5 position were tested, but virtually no conversion was noted in the PdII-catalyzed dihydroxylation reaction of catechol and trans-1-phenyl-1,3-butadiene.76 Other examples of mono-oxazoline-tethered CF3-based ligands were presented as amido-oxazolinate/ZnII and sulfoxide-oxazoline/PdII complexes.77 Similarly, fluorous bis(oxazoline) (BOX) ligands were synthesized and used with various metals to attain solubilization of the obtained chiral catalysts in fluorous solvents (Fig. 10). Starting from simple BOX derivatives, the incorporation of perfluoroalkyl substituents, comprising C8F17 and C10F21 chains, on the methylene bridge afforded (S,S)-219-221, (R,R)-222, and (R,R)-223. High ees were obtained using these chiral ligands in the PdII-catalyzed allylic alkylation,78 CuI-catalyzed allylic oxidation78b and cyclopropanation,79 and CuII-catalyzed ene80 reactions. Used in these reactions, (S,S)-224 and (S,S)-225 both showed a favoured complexation with [Pd(η3-C3H5)Cl]2 to reach high chiral inductions, whereas low reactivity was obtained with Cu(OTf)·0.5C6H6.78b To facilitate the recycling of fluorinated BOX ligands, up to four perfluorooctyl chains were introduced on (S,S)-226, which was synthesized from (S)-tyrosine, to increase its fluorine content up to 59.3%.81 Mono-perfluoroalkyl-bridged BOX ligands (S,S)-227 (ref. 82) and (S,S)-228 (ref. 78b) were essentially designed to be used in common organic solvents for optimum catalytic activity, but to be recovered via a fluorous solid-phase extraction. Moving the perfluoroalkyl chains closer to the centre of chirality had no tremendous influence on the ees obtained using catalysts prepared from (S,S)-229 together with PdII or CuI salts.81 As part of the ligand screening, the indane-based bis(oxazoline) (S,R)-230/CuII catalyst afforded a tert-butyl α-fluoroester with a modest 61% ee via the fluorination of diazoester 19.83 The synthetic utility of (S,S)-231 was demonstrated in the synthesis of enantioenriched compounds bearing 3-hydroxy-2-oxindole and quinazolinone scaffolds using CuII or ScIII catalysis.82c,d Importantly, the great ability of the triazole ring to coordinate with copper salts was offset by the perfluoroalkyl group effect on azabis(oxazoline) ligand (S,S)-232 or its F51N9-tripodal analogue. As a result, high enantioenrichments were afforded in the benzoylation, Friedel-Crafts alkylation, and Henry reactions.84 The strategy of preventing the immobilization of BOX chiral ligands onto poly(ethylene glycol) (PEG) materials was further explored using (S,S)-233-236 (Fig. 11). Two substitution motifs comprising perfluoroalkyl chains, A85 or B,86 were included in the structure of the BOX ligands. The fluorinated moieties, which were separated from the coordination sites by an appropriate spacer to reduce any undesired interactions, allowed excellent recovery and recycling of the ligands via practical procedures.
All things considered, the availability of many fluorinated aldehydes gives access to a larger diversity of C2-symmetric chiral diamine and diimine ligands. The variety of functionalization patterns brought by introducing fluorine at every carbon of the aromatic ring has facilitated the fine-tuning of the Lewis acidity of a catalyst through electronically modified properties. Chiral ligands bearing the SF5 group have remained limited due to the scarce availability of the building blocks, and therefore, the pyridine bis(imidazoline) ligand is considered a major breakthrough. Importantly, not only does the recycling of chiral fluorous catalysts remain a key objective pursued using various strategies, but the incorporation of diverse fluorous chains has also led to ligand design for fluorous biphasic systems.87
2.4 P,N-Based binding modes
The simple reaction conditions, involving copper iodide and N,N′-dimethylethylenediamine (DMEDA), afforded (S)-242-245 with good yields, notwithstanding the steric and the electronic properties of the substrate. Noteworthily, a considerably increased catalytic activity of the PdII catalysts was observed using (S)-243 and (S)-245, which was not observed with the CF3-substitution pattern only on the phenyl rings of (S)-244. The great utility of (S)-245 was highlighted in the PdII-catalyzed intramolecular decarboxylative allylic alkylation reaction, where highly enantioenriched cyclohexanone derivatives bearing all-carbon quaternary stereocentres at the α-position were obtained with excellent yields.87,88 Furthermore, the (S)-245/PdII-catalyzed asymmetric alkylation reaction was described as an important key step in the synthetic route to (+)-elatol, a spirobicyclic natural product belonging to the chamigrene family.88c Favourable complexation with palladium salts was achieved using strong π-acceptor bis(perfluoroalkyl)phosphino(oxazoline) (FOX) ligands for the alkylation of monosubstituted allyl esters.89 The synthesis of naturally occurring and non-natural isoflavones, members of the flavonoid class, was also targeted using (S)-245 through a Pd0-catalyzed decarboxylative protonation reaction.90 According to the postulated TS, both the tBu and MeO groups pointing upwards (from the ligand and the substrate, respectively) induce a highly sterically hindered environment and lead the protonation to occur preferentially via the Si face (Scheme 27).90a As a result, isoflavone (R)-246 was obtained with excellent yield and enantioselectivity.
2.4.2 Diphosphines. Fluorinated (S,S)-DACH-derived P2N2 tetradentate ligand (S,S)-258 was employed in the RuII-catalyzed cyclopropanation reaction of α-methylstyrene 259 with ethyl α-diazoacetate 135 (Scheme 30).95 The presence of 4-CF3-C6H4 groups was beneficial for the synthesis of cyclopropane 260 with good 70% de(cis), 86% ee(cis), and 34% ee(trans), whereas the non-fluorinated analogue afforded 260 with low 52% de(cis), 23% ee(cis), and 18% ee(trans). When styrene 20 and 1-octene were used as substrates, the [RuCl(OEt2)((S,S)-258)]PF6 catalyst was also highly diastereoselective for the corresponding cis cyclopropanes. Again, CF3-substituted aryl groups were incorporated into an (S,S)-DPEN-based PNNP ligand, whose chiral FeII complexes were found to be highly electronically deficient, but inactive in the asymmetric transfer hydrogenation of acetophenone.96
Scheme 30 Asymmetric cyclopropanation reaction catalyzed by a CF3-containing PNNP/RuII complex.
2.4.3 Axially chiral monophosphines. The important F-containing atropisomeric 1,1′-biphenyl architecture, i.e., 4,4′,6,6′-tetrakis(trifluoromethyl)biphenyl-2,2′-diamine (TF-BIPHAM) (Sa)/(Ra)-261, was obtained as an enantiopure material via the resolution of the (Sa,S)- and (Ra,S)-10-camphorsulfonyl-based disulfonamide diastereoisomers and was employed in the synthesis of chiral amine-phosphine ligands (Fig. 12). The first generation of C2-symmetric N,N′-PR2-TF-BIPHAM ligands (Sa)-262-265, comprising diaryl- and dialkylphosphinyl groups, exhibited excellent asymmetric induction in the RhI-catalyzed hydrogenation of enamides.97 Catalytic applications towards the synthesis of enantioenriched saturated heterocycles via a highly efficient 1,3-dipolar cycloaddition reaction, using CuI (ref. 98) or AgI (ref. 99) catalysis, were developed using the second generation of ligands. Indeed, the scope of these ligands was extended to include mono-N-phosphanyl (TF-BIPHAMPhos) derivatives (Sa)-266-269. When using the (Sa)-268/AgI catalytic system, pyrrolidine endo-(R,R,R)-272 was afforded in excellent yields and stereoselectivities via the 1,3-dipolar cycloaddition of in situ generated azomethine ylides, as shown using imine 271, to vinyl sulfone 270 through the more accessible Si face of the imine (Scheme 31).99b The CuI-catalyzed three-component alkynylation reaction of 276, 277, and rac-278, followed by the AuI-catalyzed dehydrative cyclization reaction of alkynediol 279 into 280, was described as highly enantioselective using phosphino(imidazoline) (StackPHIM) ligand (Ra,R,R)-274 (Scheme 32).100
Scheme 32 Two-step synthesis of 2-aminoalkyl furan via the alkynylation/cyclization reaction sequence.
Since the imidazole analogue (Sa)-273 (a StackPhos ligand)101 and (R,R)-275 (having no axial chirality) both led to 2-aminoalkyl furan 280 (R or S) with lower enantiomeric enrichments, the complementarity between the stereocentres and the chiral axis in reaching a higher ee was demonstrated. Further fine-tuning of the 274/CuI catalyst was highlighted by the combination of the (R,R)-DPEN scaffold with either the Ra or the Sa atropisomer, resulting in an increased chiral induction of 280 from 82% ee (R) to 94% ee (S). Noteworthily, the atropisomerism of the configurationally stable P,N-ligands 273 and 274 arises from π,π-stacking interactions between the naphthyl and C6F5 moieties.
The rotational energy barriers (ΔG‡), determined experimentally at 50 °C, of 26.8 kcal mol−1 (Ra into Sa) and 27.5 kcal mol−1 (Sa into Ra) proved that both atropisomers of 274 could be synthesized, separated, and successfully employed as chiral ligands in metal catalysis. However, the absence of fluorine atoms considerably lowered the ΔG‡ values for the Ra-Sa (or Sa-Ra) interconversion, and epimerization of 281 occurred easily even at room temperature.
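To put these barriers in perspective, the interconversion rate implied by a given ΔG‡ can be estimated from the Eyring equation; the numbers below are an illustrative back-of-the-envelope estimate, not values reported in the original study:

k = (kB·T/h)·exp(−ΔG‡/RT)

With ΔG‡ = 26.8 kcal mol−1 and T = 323 K (50 °C), k ≈ 5 × 10−6 s−1, corresponding to a half-life (t1/2 = ln 2/k) of roughly 40 h. A barrier of this magnitude is consistent with atropisomers that are configurationally stable enough to be separated and used as ligands, whereas a markedly lower ΔG‡, as in the fluorine-free case, allows epimerization at or near room temperature.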
2.4.4 Ferrocenes. The popular 3,5-bis(trifluoromethyl)phenyl scaffold was incorporated into chiral ferrocenyl-derived P,N-containing ligands (Fig. 13). Indeed, imine-, amine-, and oxazoline-based phosphine ligands 282-284 were successfully used in the PdII-catalyzed allylic alkylation,102 the RhII-catalyzed hydrogenation,103 and the CuI-catalyzed 1,3-dipolar cycloaddition reactions.104 In the last case, the divergent exo/endo selectivities observed from the experimental results were rationalized through computational studies. Interestingly, the postulated TS models suggested that two different chelation modes of the substrate arose from the electron-deficient ArF substituents of (Sp,S)-284 vs. the electron-rich phenyl rings of its non-fluorinated analogue. Moreover, closely related ferrocenyl-based bis(perfluoroalkyl)phosphino(oxazoline) ligands were designed in a series bearing bulky tBu, Ph, and Bn substituents at the C-stereocentre.89,105 In brief, the widespread use of 4-CF3 and 3,5-(CF3)2 motifs on the aromatic scaffold of the ligands was highlighted in this section. Importantly, the C6F5 group has been demonstrated to be highly efficient when used in axially chiral phosphines, where its electrostatic pairing with the π system of the naphthyl moiety induces a rotational energy barrier allowing atropisomerism. Also, the use of o,o′-CF3 groups within the TF-BIPHAM architecture proved valuable in this strategy.
2.5 P-Based binding modes
2.5.1 Planar chiral diphosphines. Electron-poor diphosphine ligands were described for JosiPhos- and WalPhos-type ferrocenes 285-289 and (Rp,R)-290-292, respectively (Fig. 14). Chiral ligands (Sp,R)-285 (ref. 106), (Rp,S)-286 (ref. 106a and 107), and (Sp,R)-287 (ref. 106b) were tested in the IrI- or RhIII-catalyzed hydrogenation and the PdII/CuI-co-catalyzed asymmetric Heck/Sonogashira reactions, but high chiral inductions were only obtained using the (Rp,S)-286/PdII catalyst. An application of fluorinated JosiPhos ligands in ionic liquids was demonstrated using (Sp,R)-288, i.e., the imidazolium-based analogue of (Sp,R)-286, in combination with [Rh(norbornadiene)2]BF4.107b The asymmetric hydrogenation reaction of methyl acetamidoacrylate, run under biphasic tert-butyl methyl ether/[bmim]BF4 conditions, afforded the corresponding product with 99% ee using either (Sp,R)-286 or (Sp,R)-288. More importantly, the ionic tag on (Sp,R)-288 allows a better recyclability of the fluorinated catalyst in the chosen co-solvent system. Chiral thiourea-derived diphosphine ligand (Rp,S)-289 was used in RhI-catalyzed hydrogenation reactions,108 where synergetic dual catalysis was induced by both the P2-ligated Lewis acid and the Brønsted acid. Control experiments revealed an important gain in ee when stronger hydrogen bonding was provided by the thiourea moiety. WalPhos-type diphosphines (Rp,R)-290-292 were also described as a promising class of ligands in RhI, RuII, and CuI catalysis.109 Other fluorinated WalPhos-type diphosphine ligands (Rp,R)-293-296 were designed to incorporate various electronic substituents on the aromatic backbone, not directly at the P atom. Such fine-tuning of the chiral ligand was studied in the RuII-catalyzed hydrogenation of ethyl 3-oxopentanoate 298 into 299 (Scheme 33).109b Unfortunately, no significant improvement of the chiral induction arising from the electronic modifications of (Rp,R)-293-296 was demonstrated in comparison with the results obtained using ligands (Rp,R)-290 and (Rp,R)-297 as references. Noteworthily, the conformational structures of chiral RuII complexes obtained from (Rp,R)-293, (Rp,R)-296, and (Rp,R)-297 were investigated through XRD analysis.
Planar chirality was further exploited in the hydroxycarbonylation reaction of styrene 20 using PhanePhos ligands (Sp)-300-303, all being available in either their Sp or Rp enantiomeric forms (Scheme 34). Chiral dinuclear PdII catalysts made from fluorinated ligands (Sp)-302 and (Sp)-303 afforded an increased selectivity for the branched regioisomer (S)-304 vs. the selectivity obtained using the electron-rich PhanePhos ligands (Sp)-300 and (Sp)-301. Moreover, the highest ee was achieved using the (Sp)-303/PdII2 catalytic system.110
2.5.2 Axially chiral diphosphines. A PdII-catalyzed P-C coupling reaction111 between enantiomerically enriched phosphanes and aryl iodides inspired the synthesis of a wide library of F-containing atropisomeric 2,2′-bis(diphenylphosphino)-6,6′-dimethoxy-1,1′-biphenyl (MeO-BIPHEP) ligands 306-312 (Fig. 15), including the polyfluorinated F12 derivatives (Ra)-306 (ref. 112), (Sa)-307 (ref. 113), and (Ra)-308 (ref. 113 and 114). The family of axially chiral F-containing bisphosphine ligands was extended to achieve a priority objective, namely the fine-tuning of the dihedral angle between the two aromatic rings (Fig. 16). Ligand designs targeting structural variations arising from steric or electronic interactions, for obtaining optimum stereoselectivities, were extensively explored. Accordingly, the ligands were classified based on their dihedral angles: among the selected classes of ligands, the narrowest dihedral angle was attributed to DifluorPhos-type ligands (θ ≈ 67°), whereas BINAP derivatives (θ ≈ 86°) were located at the other end of the steric scale.119a,b Overall, all electronically impoverished ligands, with specific steric profiles, showed excellent chiral inductions when complexed with the appropriate Lewis acid in various asymmetric reactions.
Strategies towards the recycling of the chiral ligand have encouraged the derivatization of BINAPs to incorporate perfluoroalkyl chains within their skeleton (Fig. 17). Fluorine substitution on the naphthyl backbone was then chosen for the synthesis of (Ra)-325. The asymmetric Heck reaction of 2,3-dihydrofuran was successfully performed in the benzene/FC-72 system using (Ra)-325/PdII.28b,124 The RuII-catalyzed hydrogenation reaction of dimethyl itaconate was described using the more heavily fluorinated BINAP (Sa)-326, which was immobilized on fluorous silica gel.125 Excellent retention of the fluorous ligand within the silica pores, via noncovalent interactions with the C8F17 chains, permitted its recycling without the use of conventional biphasic extraction methods. Similar to (Ra)-325, F-containing BINAPs (Ra)-327 and (Ra)-328 were developed for their great ability, as "light" fluorous ligands, to be extracted from the other organic compounds via simple FRP column chromatography.126 Introducing the perfluoroalkyl chain at the P atom, as highlighted by (Ra)-329, afforded poor ees in three metal-catalyzed reactions, potentially due to the proximity of the fluorous tails to the activating site.127 The relatively low fluorine content of (Ra)-329 (51.5%) failed to give satisfactory chiral inductions in FBS. However, (Ra)-329 was separated quickly from the reaction mixture using liquid-liquid extraction with perfluorocarbons.
Research towards the development of greener synthetic methods has focused on the replacement of commonly used organic solvents by less hazardous and more environmentally benign alternatives, e.g., supercritical carbon dioxide (scCO2). The RhI-catalyzed hydroformylation reaction of monosubstituted alkenes 20 and 331-333 was performed in supercritical fluids using the phosphine-phosphite ligand BINAPHOS (Ra,Sa)-330 (Scheme 35).128 The aldehydes 334-337 were obtained with good regioselectivities and enantioselectivities using the (Ra,Sa)-330/RhI catalytic system. Considered as CO2-philic, the perfluoroalkyl chains permitted sufficient solubility of the ligand for it to be used under homogeneous reaction conditions. Moreover, C4F9, C6F13, and C8F17 ponytails were incorporated into the 1,1′-binaphthyl core to generate [RF(CH2)3]-BINAPHOS analogues.129
2.5.3 Phosphoramidites. A wide library of monodentate phosphoramidite ligands was designed for numerous metal-catalyzed asymmetric reactions (Fig. 18).130 Following the long-arm approach, substituted phenyl rings were incorporated at the 3,3′-positions of the binaphthol skeleton in order to enhance chiral inductions. Indeed, the space surrounding the ligated metal centre was considerably restricted by bulky 3,5-(CF3)2 and 4-NO2 aromatic rings, as observed in (Sa)-342/(Ra)-343 and (Sa)-344/(Ra)-345, respectively. Additional fine-tuning of the chiral ligands was possible at the amine moiety, where electron-deficient benzyl substituents ((Sa)-344/(Ra)-345) or a hindered piperidine ((Ra)-343) were generally more beneficial, as observed from the obtained ees. A set of (R,R)-TADDOL-derived phosphoramidite ligands was screened in the PdII/CuI-catalyzed alkynylation reaction, but only the aryl substituents bearing bulky TMS groups and one electron-withdrawing F atom afforded the optimal chiral induction.131 (R)-BINOL-based phosphite ligands, generated from 3,5-bis(trifluoromethyl)phenol and 2,2,2-trifluoroethanol, were considered promising ligands for the synthesis of RhI complexes and their use in enantioselective catalysis in ionic liquids.132
2.5.4 Other P ligands. Fluorinated monophosphine, P-stereogenic phosphine, and sugar-derived phosphinite ligands have been complexed with various noble metals (Fig. 19). The (10-1-10) copolymer poly(quinoxaline-2,3-diyl)phosphine (PQXphos) (S,S)-(R)-346, having P helicity, was employed as a ligand in the asymmetric Pd0-catalyzed Suzuki-Miyaura coupling reaction, giving only a moderate chiral induction.133 Interestingly, the sense of the axial chirality (Ra or Sa) at the active site of 346 would be induced, according to the structural models,133b by the helicity adopted by the polymer in either the right-handed (P) or the left-handed (M) helix geometry, respectively. Another monodentate (1R,3R,4S)-menthyl-based phosphine ligand, bearing two C6F13 and C8F17 ponytails, was reported to give fluorous chiral catalysts when mixed with IrI and RhI salts.134 Diphosphine ligand (S,S)-347, chiral at the P atoms and belonging to the class of 1,1′-bis(diphenylphosphino)ferrocenes (dppf), was designed to modulate the stereoselective event in the PdII-catalyzed nucleophilic substitution of allylic acetates.135
When rac-114 and 115 were used as substrates, the combination of the bulky 1-naphthyl substituent with the electronically deficient 4-F-C6H4 unit afforded a slightly lower enantioselectivity (61% ee of (S)-116) than the unfluorinated one (68% ee of (R)-116) and its electron-donating analogue (4-OMe-C6H4; 69% ee of (S)-116). A PtII-catalyzed alkylation of linked secondary phosphines (HRP-PRH) in the presence of various benzyl bromides, comprising F- and CF3-containing ones, led to P-stereogenic diphosphines with low stereoselectivities.136 The hydrogenation of a variety of dehydroamino acids was reported using chiral phosphinite/RhI catalysts made from various carbohydrate scaffolds.137 Being highly dependent on the P-aryl substituent, the level of chiral induction was demonstrated to be considerably higher when using electron-rich bis(3,5-dimethylphenyl) groups vs. electron-poor ones, such as in the C2,C3-bis(di-4-fluorophenyl)phosphinite ligand (2R,3S)-348, derived from phenyl β-D-glucopyranoside. Furthermore, a (R,R)-DIOP-like 4-(trifluoromethyl)phenylboronate diphosphine ligand was synthesized and mixed with RhI, PdII, and PtII salts to generate heterobimetallic complexes.138 In general, the 3,5-(CF3)2-C6H3 substituent has been widely used in P-based chiral ligands. Various fluorous ponytails were incorporated into BINAP ligands to give "heavy" and "light" fluorous analogues to be used in distinct synthetic applications. Major advancements were demonstrated using CO2-philic BINAPHOS ligands bearing perfluoroalkyl chains in the enantioselective hydroformylation reaction performed in supercritical carbon dioxide. Noteworthily, this section has detailed numerous mono- and diphosphines incorporating a large range of structurally diverse fluorine-containing groups.
2.6 P,O-Based binding modes
Being O-alkylated with perfluoroalkyl chains at the very last step of the synthesis, 2-(diphenylphosphino)-2′-alkoxy-1,1′-binaphthyl (MOP) ligand (Ra)-349 was used, together with [Pd(η3-C3H5)Cl]2, in the asymmetric alkylation reaction of β-dicarbonyl derivatives with 1,3-diphenyl-2-propenyl acetate rac-114 (Scheme 36).127,139 The corresponding alkylated products (R)-116 and 353-355 were obtained in moderate to excellent yields, and good ees were obtained. Noteworthily, (Ra)-349 was completely extracted from the reaction mixture using n-perfluorooctane, whereas the catalytic activity of the recycled PdII complex was lost when it was used in subsequent reactions. As revealed by XRD and DFT studies, strong AuI-π interactions with the electron-deficient F-arene give an increased stability to the catalyst by limiting the number of possible rotamers, therefore locating the chiral environment in the optimal orientation. Bulkier substituents at the amine moiety ((S)-PhMeCH vs. iPr) further enhanced the steric interactions around the AuI centre and thus the obtained enantioselectivity. N-Heterocyclic carbene (NHC)-CuI catalysts (S,S)-361-363 were employed in the asymmetric allylic arylation (AAAr) of aliphatic allylic bromides 364-366 with PhMgBr (Scheme 38).141 Overall, the substitution by sterically hindered and electron-deficient aryl groups on chiral NHC-CuI complexes, e.g., (S,S)-362 and (S,S)-363, was found beneficial for obtaining a higher regioselectivity towards the γ-regioisomers 367-369 than with (S,S)-361. When using the allylic bromide 364, the best enantioselectivity was afforded by using (S,S)-361, but the sterically and electronically fine-tuned catalysts considerably improved the γ-selectivity (367 : 370 up to 92 : 8). Excellent regioselectivity for (R)-368 was obtained particularly with (S,S)-362, whereas excellent enantioselectivities were rather observed when using (S,S)-361 and (S,S)-363. The NHC-CuI-catalyzed AAAr of 366 led to the α-product 372 preferentially, and only the F-containing catalyst (S,S)-363 afforded (R)-369 with the best regio- and enantioselectivity when using the sterically hindered tBu-substrate.
2.7.2 Dienes. The fluorination reaction of allylic trichloroacetimidates rac-374 was performed via a dynamic kinetic asymmetric transformation (DKAT) mechanism using the IrI pre-catalyst [(S,S)-373]2 (Scheme 39).142 Chelated by the fluorinated (S,S)-bicyclo[3.3.0]octadiene ligand, the IrI centre generated, via ionization of the substrate, the energetically lowest π-allyl intermediate of the computed diastereomeric TSs. Accordingly, the most substituted carbon undergoes nucleophilic attack by the fluoride from the outer sphere, and (R)-375 was obtained with excellent regio- and enantioselectivities.
RhI- or IrI-catalyzed highly enantioselective organic transformations, i.e., arylation,143 conjugate addition,144 and cyclization reactions,145 were demonstrated using C2-symmetric tetrafluorobenzobarrelene (tfb) ligands 376-384 (Fig. 20). The bicyclic [2.2.2]octatriene skeleton was obtained in both the R,R and S,S enantiomeric forms via resolution of the racemic mixture by high-pressure chiral liquid chromatography. Once the desired arms were attached, the electronically deficient diene ligands 376-384 strongly chelated metal cations, and highly sterically hindered environments were then created in the upper-right and lower-left quadrants.146
Conclusions
The introduction of the fluorine atom has revealed a positive modulation of stereoselective events using metal catalysis. An optimal balance between ideal substrate activation and the inherent stability of the catalyst can be established by using these electron-deficient ligands. The examples presented herein demonstrate that the fluorine atom fine-tunes both the electronic and the steric properties of the chiral catalyst. The high availability of F-containing aromatic compounds, particularly those bearing the 3,5-(CF3)2-C6H3 and the 4-CF3-C6H4 motifs, favours their incorporation at desired specific positions. Generally, their high electron-withdrawing ability was exploited to reach an increased acidity of the chelated Lewis acid. Also, the chiral environments were often better defined by the participation of the fluorinated aromatic ring through electrostatic pairing, both with the metal centre in the inner sphere and with the substrate orientation in the outer sphere. The use of the fluorine atom and polyfluorinated groups as sources of steric bulk remains underdeveloped in asymmetric catalysis. Unexpectedly high chiral inductions, which are scarcely reported in the literature, are likely to arise from the intrinsic steric properties of bulky polyfluorinated substituents. Besides stereoelectronic reasons, the presence of fluorine in chiral ligands is also valuable from a green chemistry perspective. The fluorophilic effect, which is observed in the presence of perfluoroalkyl chains, has indeed opened the field of asymmetric catalysis to fluorous biphasic systems. Due to easy extraction procedures, many advantages were highlighted by the recycling of the chiral catalyst over multiple runs without loss of chiral induction over time. Not only is the fluorine atom considered a highly multifunctional tool for ligand design, but its judicious employment is also of paramount importance for metal-catalyzed asymmetric transformations.
Author contributions
The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript.
Conflicts of interest
There are no conflicts to declare. | 2022-08-10T15:19:22.773Z | 2022-08-08T00:00:00.000 | {
"year": 2022,
"sha1": "62b5339f5ecf7df2cc92315c98926aa868107f39",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/sc/d2sc01096h",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1bbf849c870e8d6a9e64d0c2ced7ddf2087a10fd",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255209027 | pes2o/s2orc | v3-fos-license | Smart Grids as product-service systems in the framework of energy 5.0 - a state-of-the-art review
INTRODUCTION
Climate change has been an open challenge in recent decades, creating pressure on governments, companies, and the international community to adopt cleaner energy strategies and improve the energy efficiency of their systems and processes/operations [1]. Production and manufacturing are among the most energy-intensive activities in modern society and, by extension, their energy consumption has been subjected to extensive research over the last few years because of increasing public awareness of key environmental issues, including the greenhouse effect and global warming, the strict legislation about permitted emissions, and rising energy costs. As a result, enterprises are moving toward efficient manufacturing [2]. Furthermore, energy efficiency has been recognized on a global scale as a major policy priority to achieve a carbon-neutral society. More specifically, energy efficiency represents one of the cornerstones of several initiatives, including the EU energy and climate policy [3], the United States policy called "Getting to Zero: A U.S. Climate Agenda" [4], and China's path toward a carbon-neutral society by 2060 [5].
In light of the recent advances in Industry 4.0 [6], the upcoming Industry 5.0 [7], and Society 5.0 [8], engineers are now focusing on the design, development, and implementation of strategies to enhance their productivity and remain competitive while also ensuring that the environmental impact of their activities remains as low as possible. Among the possible solutions for energy and waste management, business models have also been transformed by following the servitization paradigm to provide models such as product-service systems (PSSs) and industrial product-service systems (IPSSs), which promote the selling of services rather than tangible goods [9]. According to recent reports [10], an increase in global electricity demand has been noted that is estimated to reach 40% by 2040. From the latest European Union (EU) market analysis, it has become evident that, under current circumstances, the costs of electric energy production and distribution are now a critical issue [11]. Consequently, energy suppliers are faced with the challenge of producing even more electrical energy to meet market demand, compensate for the production cost of this electrical power, and follow new environmental initiatives for greener and more sustainable power production as well. The solution to this challenge lies in the design and development of frameworks to support power generation from renewable and distributed energy resources, which by extension should also be integrated into the outdated, inflexible, and overstressed centralized electrical grids. Essentially, it has become apparent from the most pertinent literature that energy distribution, beyond the existing environmental challenges and the concerns raised by communities, should be democratized. Consequently, energy distribution in modern smart grids (SGs) is accompanied by a constant exchange of information between the energy suppliers and energy consumers. Therefore, with the proposal and integration of PSSs in SGs, new opportunities for the creation of suitable communication channels between the clients and the producers will be established. The resulting communication among the stakeholders will allow the volatility of demand, which remains an ongoing challenge, to be tackled more efficiently [11]. Furthermore, with constant tracking of energy demand vs. energy production vs.
the production resources (e.g., fossil fuels, renewable sources), energy producers will be more capable of fully utilizing the potential of their renewable sources while minimizing the environmental footprint of their operations. Another important challenge is addressing the stability of the networks and preventing power outages, which in many cases are related to increased and uncontrolled demand from the clients. Similarly, the decentralization model of the SG offers additional benefits to society as a whole, since new opportunities can be created for energy companies while their competitiveness is maintained. In addition, this model means that energy consumers can also play a more active role in energy production. More specifically, rather than following the classic consumption model, consumers can contribute to the SG by generating electrical power (via utilization of individual devices) that can be stored and then delivered to meet demand in peak energy consumption periods. Therefore, by using digital technologies such as blockchain, distributed ledgers can be implemented to maintain a complete record of the energy transactions, set up virtual contracts between clients and producers, and provide a space for the implementation of a client reward system. The generation of more efficient production schedules that focus on rebalancing energy supply and demand, driven by the ever-increasing need to reduce costs, is thus an imperative.
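As a purely illustrative sketch of the kind of record keeping described above, the following Python snippet chains energy transactions through cryptographic hashes, in the spirit of a distributed ledger; the field names (producer, consumer, kWh, price) and the single in-memory list are simplifying assumptions for illustration only, not a reference implementation of any specific blockchain platform.

import hashlib, json, time

ledger = []

def add_transaction(producer, consumer, kwh, price_eur):
    # Each record points to the hash of the previous one, so tampering breaks the chain.
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"time": time.time(), "producer": producer, "consumer": consumer,
              "kwh": kwh, "price_eur": price_eur, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)

def ledger_is_consistent():
    # Verify that every record references the hash of its predecessor.
    return all(ledger[i]["prev_hash"] == ledger[i - 1]["hash"] for i in range(1, len(ledger)))

add_transaction("rooftop_pv_17", "household_42", kwh=3.5, price_eur=0.70)   # prosumer sells surplus
add_transaction("household_42", "grid_storage_A", kwh=1.0, price_eur=0.18)  # surplus sent to storage
print(ledger_is_consistent())  # True as long as no stored record has been altered

A production deployment would replace the in-memory list with a replicated ledger and add digital signatures, but the hash chaining shown here is the core mechanism that makes the transaction history tamper-evident and thus usable for virtual contracts and reward schemes.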
Ultimately, in an attempt to address the challenges mentioned and the literature gaps identified in the preceding paragraphs, this work presents and discusses the latest developments in the field of PSSs (including IPSSs) and the servitization of energy distribution that takes advantage of the cutting-edge digital technologies introduced in the Industry 4.0 framework. The remainder of this work is structured as follows. In Section 2, the most pertinent and relevant literature in the SG field and the accompanying technologies and techniques is investigated. Then, in Section 3, technical details are provided through the presentation of frameworks that were mainly developed in the industrial domain but are expandable to the SG concept. In Section 4, the key research challenges for the near future are summarized, possible solutions are discussed, and a conceptual solution framework is presented. Finally, in Section 5, conclusions about the research are presented, and aspects of future work are discussed.
Review methodology
A bibliometric analysis was conducted to examine the existing bibliographic material and identify the primary scientific directions for this research area. The aim was to provide a thorough understanding of the research problems and perform an in-depth analysis of the distinctive aspects of the development of scientific research in this field. The sequence for the bibliometric analysis is presented in Table 1.
The initial search returned a total of 181 scientific literature articles. These articles comprised 51 journal articles, 92 conference papers, 10 book chapters, two books, and 26 conference reviews. In addition, with regard to the relevant topics, the majority of these publications fall into the categories of engineering, energy, computer science, mathematics, and environmental science.
Next, the results dataset was converted into the comma-separated values (CSV) format for further processing. VOSviewer software was used in an effort to visualize the results and analyze their bibliometric form, as presented in Figure 1. Specifically, VOSviewer provides the functionality required to create a keyword map based on shared networks, and it can thus create maps with multiple items, along with publication maps, country maps, journal maps based on networks (co-citations), and maps with multiple publications. Less relevant keywords can be removed, and the number of keywords that is used can be adapted by the users. In summary, the functionalities of the VOSviewer software extend to the support of data mining, mapping, and grouping of articles retrieved from scientific databases. Topic mapping is essential for bibliometric research [12]. Additionally, the associated energy costs are rising globally. As a result, there is growing pressure to address both the volume and the type of energy consumption across all sectors by implementing new and creative solutions [13]. New electricity sources such as wind and solar energy have now been demonstrated to be mature and dependable technologies, but there are still related difficulties. The next step will be to tailor these new forms of energy generation to the users' consumption patterns. Because industrial users account for more than 40% of the global total energy consumption, there is a significant opportunity to increase energy efficiency by taking advantage of important trends in industry [14].
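To make the CSV processing step described at the beginning of this subsection more concrete, a minimal sketch of how keyword co-occurrence counts could be extracted from such an export before visualization in VOSviewer is given below; the column name "Author Keywords", the ";" separator, and the file name are assumptions about the export format rather than a description of the exact script used in this study.

import csv
from collections import Counter
from itertools import combinations

pair_counts = Counter()
with open("scopus_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Normalize the keyword list of each article.
        keywords = [k.strip().lower() for k in row.get("Author Keywords", "").split(";") if k.strip()]
        # Count every unordered keyword pair appearing in the same article.
        for a, b in combinations(sorted(set(keywords)), 2):
            pair_counts[(a, b)] += 1

# The most frequent pairs approximate the clusters that a co-occurrence map (cf. Figure 1) highlights.
for (a, b), n in pair_counts.most_common(10):
    print(f"{a} -- {b}: {n}")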
In Figure 2, the correlation between the industrial revolutions and energy is illustrated. More specifically, when mechanical production began to replace manual labor in the late 18th century, the first major industrial revolution occurred. Chronologically, the second industrial revolution took place a century later, following the widespread electrification of industrial processes. Subsequently, electrical grids began to be developed
on a global scale. In the third industrial revolution, which began in the middle of the 20th century, process automation and computers were introduced to enable further optimization of the production process. The fourth industrial revolution, which is also known as Industry 4.0, is currently using new smart and connected systems to boost the flexibility and overall productivity of industry. By extension, the interconnectedness of machinery, larger systems, and devices, both within and between industrial sites and users, has led to increased manufacturing intelligence. The sustainable energy transition and Industry 4.0 thus share important characteristics that can be interconnected toward the realization of Industry 5.0 [16].
Some fundamental guidelines for incorporation of Industry 4.0 tools that will enable Internet connectivity and usage in operational and industrial processes to aid in the implementation of Energy 4.0 are summarized in the following points [17]:
• Interoperability: Through use of the Internet and its services, interoperability represents the connection of various components and human resources.
• Virtualization: Information from sensors, simulation models, back-office systems, and other resources can be made into virtual copies.
• Real-time capability: The capacity to gather data, conduct analysis, and reach decisions immediately with near-zero latency.
• Modularity: The ability to replace, add, or remove components as needed.
Consequently, through the implementation of Industry 4.0 and Energy 4.0, engineers have laid the foundations for the realization of Energy 5.0. More precisely, developments falling within the framework of Energy 4.0 have also been defined in the literature as Smart Grid 1.0 (i.e., the first-generation SG). Similarly, the imminent changes and advances that are being made towards Energy 5.0 are defined as the second-generation SG, i.e., Smart Grid 2.0, and will be discussed in the following paragraphs. Energy 4.0 is a concept that was introduced within the framework of Industry 4.0, which refers to the digitization of the energy sector [18]. A detailed definition of Energy 4.0 is necessary here, because Energy 5.0 is heavily correlated with this concept as it essentially represents the next stage in its evolution [19]. It should be stressed here that the energy sector includes key areas such as energy generation, distribution, storage, and marketing, among other aspects. The reasons behind these changes include the fact that the physical world is changing at an unprecedented speed, with significant issues to be addressed in intermittent renewables, nuclear power, and new transmission and distribution grids, among other areas. Additionally, the commercial energy world is changing (e.g., unbundling, trading, and new products). Finally, another significant reason is the constantly growing collection and flow of big data sets.
Cyber-physical systems (CPSs) are essential elements of Energy 4.0 [20]. CPSs are composed of physical entities and are controlled or monitored using computer-based algorithms. The energy industry can also be considered to be one huge and highly complex CPS. As a result, the energy industry is likely to be seriously affected by cutting-edge Industry 4.0 technologies [21]. Figure 3 illustrates the concept of the CPS in the energy sector as the convergence of energy law frameworks and information and communications technology (ICT) law frameworks, as presented by the global community during recent years.
Benefits of the electricity digital revolution
A plethora of new technologies has been developed as a result of the digital revolution occurring in the electricity sector, along with exponential increases in both data processing and storage capacity. Many advantages have accompanied this change and are summarized in the following [22]:
1. Electricity utilities have further aided in addressing the grid instability and imbalance issues that have been partially exacerbated by the introduction of intermittent renewable energy sources. Widespread adoption of pre-emptive processes and much faster corrective actions have been made possible by implementation of real-time data monitoring.
2. The related interoperability of the various asset types, including renewable generation resources, energy storage facilities, and flexible loads, has been essential to this digital transformation.
3. The detection of process inefficiencies and equipment malfunctions at industrial sites has also been made possible using these data and monitoring-based approaches.Changes in business practices and replacement of outdated technologies with newer, more effective models are only two components of the solution.
Artificial intelligence (AI)-based software applications with higher levels of sophistication can also be used actively to optimize energy flows.
4. Reductions in energy consumption ranging from 13% up to 29% have been enabled by use of new technologies and waste reduction. This has resulted in a remarkable 4% reduction in total global CO2 emissions.
5. Placement of major technological innovations is helping to improve the efficiency and sustainability of the energy sector.
6. Increased flexibility has been realized in operational procedures.
7. Increased levels of personalization have been introduced into the services in an attempt to meet the requirements of customers.
8. The capability to obtain real-time, accurate information has been realized.
9. Accurate and thorough monitoring of the entire supply chain, including generation, transmission, distribution, and commercialization, has been enabled.
10. Process automation has improved the operational effectiveness of businesses.
11. Real-time supply and demand adjustments can aid in reducing the number of inefficient operations.
Challenges of the electricity digital revolution
Developed countries have reliable electrical infrastructures and minimal growth rates that allow them to focus on grid standardization, smart meter implementation technology development, and the interoperability of grid-connected and distributed renewable energy generation [23]. The business model for Energy 4.0 offers various advantages, but there are also several challenges that must be addressed to enable a successful transition toward a more sustainable and human-centric Energy 5.0 business model. Therefore, the key challenges and issues that must be addressed by both the developed and developing countries to realize the full range of benefits of SG implementation are discussed hereafter [24]:
1. Technology Development: Over the past decades, ICT has advanced significantly [25]. However, to make the grid smarter, a brand-new communication infrastructure that is highly reliable and attack-resistant must be constructed, either separately from or integrated into the existing World Wide Web. Advanced sensor systems will be created and implemented in both smart buildings and the grid to measure phases, collect consumer consumption data, control automatic circuit breakers to ensure minimal disruption, and perform peak shaving of electrical appliances. It is thus necessary to develop and implement cutting-edge components, including smart appliances, smart meters, effective energy storage devices, high voltage DC transmission devices, and flexible AC transmission system (FACTS) devices [26].
2. Quality Power to All Households: In the coming years, significant expansion of the power system network will be required to ensure reliable supply of electrical energy to all households.To realize the vision of the SG, the quality of the supply must also be guaranteed.Therefore, to ensure that a high-quality supply to all households is maintained, the current grid will have to be upgraded and expanded.In addition, to reduce the supply gap during peak hours and peak energy costs, distributed renewable energy generation and the ability to save money by shifting loads from peak periods to off-peak periods should also be encouraged [27,28] .
3. Reduction of Transmission and Distribution (T&D) Loss: T&D losses will be minimized to meet international standards. Technical losses caused by a weak grid, financial losses, and a decline in collection efficiency are the main factors that influence the T&D loss [29].
4. Interoperability and Cyber Security: An advanced metering infrastructure (AMI) and SG end-to-end security, a revenue metering information model, building automation, inter-control center communications, substation automation and protection, application-level energy management system interfaces, information security for power system control operations, and phasor measurement unit (PMU) communications are among the interoperability standards that have been created by the National Institute of Standards and Technology in the United States (including intelligent electronic devices or IEDs) [30] .
5. Consumer Support: The lack of consumer awareness of problems in the power sector is one of the main obstacles to the implementation of the SG. Therefore, to reduce peak load consumption and encourage distributed renewable energy generation, consumer support for SG implementation will be essential.
Intelligent grid implementation will raise the quality and consistency of the power supply.It will also ensure that utility customers will have an easy-to-use and transparent interface, additional options, including green power, and the ability to save money by shifting their loads from peak times to off-peak hours.Nevertheless, to benefit from the SG on both individual and national levels, consumers must be aware of new technologies and support their utilities [31] .
Industry 4.0 technologies and electrical industry
Industrial Internet of Things (IIoT) sensors can be used by electric companies to collect behavioral data about their assets. Machine learning (ML) algorithms and big data techniques can then be used to analyze this information along with the data acquired from the rest of the power network to predict problems and assist operations managers in determining when to maintain or replace a network asset. Knowing when to perform equipment maintenance increases the equipment's lifespan and reduces the number of truck rolls, the numbers of field personnel deployed, and the material stock, thus contributing to financial savings. Electric companies are familiar with use of sensing equipment to monitor their assets and have used sensors in their operations for many years. These sensors are used to monitor a variety of parameters, including load, voltage, phase, temperature, and oil viscosity, and they also give Supervisory Control and Data Acquisition (SCADA) system operators advance notice of equipment failure. However, there are two main distinctions between the IIoT devices suggested by Industry 4.0 and the existing sensing and actuating devices that are present in the power grid [32].
First, IIoT devices are much simpler to deploy in larger quantities in any piece of equipment or at any point in the network because of their smaller sizes, lower power requirements, and lower costs when compared with the current devices. IIoT sensors represent the best way to gather the data required for predictive maintenance plans for older assets and for areas in the network that were not previously monitored because most legacy equipment does not contain embedded sensing devices [33].
The second distinction is that the data from the sensors are now delivered using the Internet protocol rather than through private area networks (PANs) or local area networks (LANs). This quality is essential for rapid and economical deployment of these devices across the entire power grid. It has been predicted that the investment required to upgrade an existing communication network will be at least 60% of its initial cost to compensate for the number of sensors and the amount of data required for an application on this scale. By relying on Internet service providers (ISPs) for management of the communications and utilities, the enormous costs associated with the development, upgrading, and maintenance of private communication networks are shifted to an outside company whose core competency is communications and which can thus provide better service at lower cost [34,35].
Therefore, when moving onto predictive maintenance plans, electricity providers have the best available options because of emerging technologies such as IIoT and ML. A typical conceptual framework for enabling predictive maintenance in the electric industry via use of Industry 4.0 technologies is depicted in Figure 4.
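As a rough illustration of how such a framework could be prototyped, the Python sketch below screens streamed sensor readings from a single asset with an unsupervised anomaly detector so that an inspection can be planned before failure; the feature set, the synthetic data, and the use of scikit-learn's IsolationForest are assumptions made for this example only, not elements of the framework in Figure 4.

```python
# Minimal sketch of IIoT-based predictive maintenance: sensor readings from a
# grid asset are screened with an unsupervised anomaly detector.
# Requires numpy and scikit-learn; all numbers are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical healthy operation: columns = [load (pu), temperature (C), vibration (mm/s)]
healthy = np.column_stack([
    rng.normal(0.7, 0.1, 500),
    rng.normal(55.0, 4.0, 500),
    rng.normal(1.2, 0.2, 500),
])

# Latest readings streamed from the asset; the last one drifts toward failure.
latest = np.array([
    [0.72, 56.0, 1.3],
    [0.68, 54.5, 1.1],
    [0.75, 78.0, 3.4],   # overheating and high vibration
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(healthy)
for reading, label in zip(latest, detector.predict(latest)):
    status = "schedule inspection" if label == -1 else "normal"
    print(reading, "->", status)
```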
Sustainable development goals for sustainable energy & industrial development
The 2030 Agenda, which is the main document intended to direct global efforts in sustainable development until 2030, was adopted at the UN Sustainable Development Summit in September 2015 [36]. The agenda lists 17 goals that are known as the Sustainable Development Goals (SDGs), along with 169 specific targets in essential development areas, including poverty, water, energy, education, gender equality, economy, biodiversity, climate action, and many others [Table 2]. More specifically, with regard to SG development, SDGs 7, 9, and 13 in particular are relevant.
A detailed overview of the Smart Grid
The introduction of the SG signals the beginning of a new era of energy industry dependability, availability, and efficiency, which will be beneficial for both the economy and the environment. Several benefits will follow the creation and implementation of the SG [37], and one of these benefits is more effective electrical power distribution, which is made possible by using algorithmic approaches that consider current and future demand along with energy production and consumption. Energy providers will be better able to track and predict grid malfunctions and act rapidly as a result of their close monitoring of the grid components, thus minimizing power disruptions (e.g., power outages). The cost of power for the consumers can then be reduced as a result of the reduced numbers of operations and management expenses required for the utilities. Additionally, better energy management and distribution among the grid users can reduce peak demand, thus enabling energy providers to lower electricity prices further. In addition to the benefits listed above, the SG encourages integration of extensive renewable energy systems (e.g., solar, wind, and hydrogen resources). When the IoT is integrated, it becomes essential to integrate renewable energy systems for two reasons. Distributed power generation comes first (i.e., decentralized power generation). Second, because customers contribute actively to the development of power consumption plans, they are more effectively integrated into and involved in the power distribution process. Finally, one critical issue that will require careful design and implementation is security against cyberattacks. To manage cyberattacks against safety-critical systems, engineers can develop and implement security frameworks with the aid of an SG. Based on the recommendations of the authors of [38], the DO-178B aviation standard may be helpful.
Industrial product service system
Europe is the region that has the highest energy consumption among manufacturers, accounting for approximately 25% of global energy consumption [39]. This category includes two different business types: small and medium-sized enterprises (SMEs) and large enterprises, with SMEs making up 99.8% of all businesses in Europe [40]. To satisfy customer demands and requirements while also increasing product value, manufacturers have recently been moving away from pure manufacturing via mass production toward a more flexible and personalized production approach for their customers. The evolution of manufacturer servitization is strongly correlated with the industrial product-service system (IPSS) model, which is a hybrid dynamic system that combines the physical products and services of a single company [41]. The following two factors have had an impact on successful adoption and use of IPSSs: 1. the long-term relationship between the supplier and the customers, which is essential because the services rely on two-way interactions between the customers and the supplier; and 2. the ICT that will support appropriate use of the available services [42].
Digital twin of Smart Grid
The digital twin (DT) concept was first used by Grieves (2015) [43] and has a potential market size of $15 billion by 2023. The DT has been rated strategically as one of the top ten technologies of 2018 (subject to predictions on future research trends) [44]. The DT, as one of the technological pillars of Industry 4.0, is a virtual representation of a valuable or physical asset, e.g., a service, product, or machine, with models that can alter behavior using real-time data and analytics supported by visualization tools and human-machine interfaces linked to the condition of the monitored object (e.g., a machine) [45,46]. With regard to the existing research, the authors of [47] created a DT for real-time analysis of complex energy systems by simulating the grid states and then assessing the grid's effectiveness using ML algorithms. A DT-based platform for smart city energy management has also been presented in the literature [48]. To benchmark a building's energy efficiency, smart meters are used to gather asset data that is then fed into DT virtual building models. Frameworks for energy demand management based on DTs have also been presented and discussed in the literature [49,50].
The literature investigation indicates that Industry 4.0 has greatly improved advanced simulation technologies and techniques such as DTs. Additionally, DTs are being incorporated into power distribution grids, which will make it easier to manage and regulate the energy supply network [51].
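To make the idea more concrete, the sketch below mimics a very small digital twin of a single transformer: a simple virtual model is advanced with the measured load, and the gap between the measured and predicted temperature is used as a health indicator. The thermal model and all numerical values are assumptions for illustration and are not taken from the cited DT implementations.

```python
# Minimal sketch of a digital-twin loop for a single grid asset (here, a
# transformer). The first-order thermal model and its coefficients are
# illustrative assumptions, not parameters from any cited DT implementation.
from dataclasses import dataclass

@dataclass
class TransformerTwin:
    ambient_c: float = 25.0      # assumed ambient temperature
    thermal_gain: float = 30.0   # assumed steady-state rise at rated load
    tau_steps: float = 12.0      # assumed thermal time constant, in samples
    temp_c: float = 25.0         # current model (virtual) temperature

    def step(self, load_pu: float) -> float:
        """Advance the virtual asset by one sample for a given per-unit load."""
        target = self.ambient_c + self.thermal_gain * load_pu ** 2
        self.temp_c += (target - self.temp_c) / self.tau_steps
        return self.temp_c

    def divergence(self, measured_temp_c: float) -> float:
        """Gap between the physical measurement and the twin's prediction."""
        return measured_temp_c - self.temp_c

if __name__ == "__main__":
    twin = TransformerTwin()
    # Streamed (load, measured temperature) pairs would normally come from
    # IIoT sensors; these values are synthetic.
    for load, measured in [(0.6, 36.0), (0.8, 44.0), (1.1, 68.0)]:
        predicted = twin.step(load)
        gap = twin.divergence(measured)
        flag = "check asset" if abs(gap) > 10 else "ok"
        print(f"load={load:.1f} pu predicted={predicted:.1f} C "
              f"measured={measured:.1f} C gap={gap:+.1f} C -> {flag}")
```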
IPSS business models of the energy sector
Modern society and the economy are dependent on energy.Much of the current industrial sustainability agenda is defined using eco-initiatives, eco-innovations, and eco-efficiency.Decarbonization, increased decentralization, and increased digitalization of energy systems are driving the current rapid changes in the energy sector.Larger consumers such as manufacturing firms are also involved in this effort, which is intended to create more sustainable energy production and consumption systems through implementation of service-oriented business models [52] .Energy providers are altering their IPSS business models to add value to their offerings and increase their competitiveness and sustainability [53] .IPSSs are business models for reliable product and service delivery that allow for cooperative product and service deployment and consumption.However, PSSs have now been integrated into a wide range of scientific disciplines, including business management, ICT, and manufacturing.Energy companies use cloud systems, IoT, and big data analytics to combine their centralized and distributed energy systems into a more complex system.To maintain the high fidelity of constant energy supply while ensuring that the electricity supply remains competitive and affordable, energy sales companies (ESCs) must also become more digitalized.The manufacturing sector (demand) and the energy sector (supply) must therefore work together more closely.Additionally, creative approaches will be required to adapt the electricity market to distributed energy generation while also enabling the industrial sector to switch to energy-efficient manufacturing.The main goals are to place the consumers at the center of the energy system and to ensure that they can then take advantage of the cutting-edge energy services that are available.
Business models address how an organization defines its competitive strategy through the design of the goods or services that it provides to its market, how it sets its prices, how much it costs to produce its goods or services, how it sets itself apart from rival organizations through its value proposition, and how it connects its value chain to those of other organizations to form a value network [54] .
The history of PSSs, the current state-of-the-art, and potential directions for future PSS research were all presented well by Meier et al. [55] .The authors also stated that the market proposition, customer requirements, and environmental impact are major defining factors.Reduced environmental impact, differentiation, attainment of competence, and production efficiency are some of the main advantages of PSS adoption.The PSS value proposition strategy is regarded as one of the innovations that will be required to advance society toward more sustainable futures as a result of extensive research [56] .
Production scheduling in the energy sector
There is growing interest in application of manufacturing scheduling as a way to reduce energy costs.One important but difficult situation is the case of scheduling of an industrial facility that is subject to real-time electricity pricing [57] .The manufacturing sector faces new challenges that these contemporary products and systems are unable to address successfully [58] .Energy-efficiency techniques and services that regulate electricity demand and optimize power consumption have thus been proposed to meet these challenges.To change the amount and/or timing of the energy consumption, industrial energy demand management (EDM) involves systematic actions being taken at the interface between the ESC and the industrial consumer [59] .
On the industrial consumer's side, the EDM activities include responses to energy price signals and production adjustments that result in energy demand flexibility (EDF). The advantages of EDF include lower power consumption costs and greater room for intermittent renewable energy sources. Energy demand response (EDR) represents shifting of energy consumption to a different point in time or to different resources, in contrast to energy efficiency, which is intended to reduce overall energy consumption [60]. Explicit and implicit schemes are the two main complementary approaches to EDR. In the first scheme, customers are rewarded specifically for their flexibility (e.g., free consultancy). An actual energy cost reduction is provided in the second scenario. Effective scheduling tools are essential for industrial EDM, where complex manufacturing processes are involved, because of the strong correlation between energy availability and energy price. Planning and running energy-efficient and energy-demand-flexible production systems necessitates in-depth understanding of the energy consumption behavior of the system components, the energy consumption of the production processes, and techniques to evaluate system design alternatives [61]. In addition, a personalized real-time pricing (P-RTP) system architecture has been proposed [62]. Only users who opt into the P-RTP would receive an equal distribution of the energy cost savings. As a result, the proposed system reduces energy costs significantly without sacrificing the welfare of the electricity users. It is concluded that all the approaches mentioned above have a common goal: the generation of flexible, adaptable, and practical production scheduling. The electricity grid typically provides the energy required to run industrial machinery. However, the ways in which the machinery uses materials and energy can be wasteful [63]. Studies have therefore been conducted on energy-supply-oriented production planning [64]. Similarly, Biel and Glock [65] presented a literature review of decision support models for application to energy-efficient production planning and described how taking energy consumption into account during production planning can lead to more energy-efficient production processes. It is evident from the available literature that integrated approaches would be advantageous for all parties involved [66].
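A minimal sketch of the price-aware scheduling idea discussed above is given below: flexible jobs are greedily assigned to the cheapest hours subject to a power limit. The tariff, the jobs, and the capacity limit are assumptions for illustration; industrial EDM tools would use far richer models and constraints.

```python
# Minimal sketch of price-aware production scheduling: flexible single-hour
# jobs are assigned to the cheapest remaining hours subject to a per-hour
# energy limit. Prices, job energies, and the limit are illustrative assumptions.
def schedule_jobs(prices_eur_per_kwh, jobs_kwh, max_kwh_per_hour):
    """Greedy assignment of single-hour jobs to the cheapest feasible hours."""
    remaining = {hour: max_kwh_per_hour for hour in range(len(prices_eur_per_kwh))}
    # Cheapest hours first; most energy-hungry jobs first.
    hours_by_price = sorted(remaining, key=lambda h: prices_eur_per_kwh[h])
    plan, cost = {}, 0.0
    for job_id, kwh in sorted(jobs_kwh.items(), key=lambda kv: -kv[1]):
        for hour in hours_by_price:
            if remaining[hour] >= kwh:
                remaining[hour] -= kwh
                plan[job_id] = hour
                cost += kwh * prices_eur_per_kwh[hour]
                break
        else:
            raise ValueError(f"no feasible hour for job {job_id}")
    return plan, cost

if __name__ == "__main__":
    hourly_prices = [0.30, 0.28, 0.22, 0.12, 0.10, 0.14, 0.25, 0.31]  # synthetic tariff
    jobs = {"furnace": 40.0, "press": 25.0, "packaging": 10.0}        # kWh per job
    plan, cost = schedule_jobs(hourly_prices, jobs, max_kwh_per_hour=50.0)
    print(plan, f"total cost = {cost:.2f} EUR")
```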
Consumer vs. prosumer
Conventional electrical grids are based on functional integration of the energy producers and consumers.However, the new characteristics of SGs offer the possibility of development of sustainable, economical, and efficient energy supplies to the customers [67] .This model thus encourages the consumers to engage in the grid's operation and management, and to contribute to the energy distribution process by producing, selling, or sharing through the grid.This means that they represent important components for the grid's functionality and transform into "prosumers", who optimize their economic-energy decisions based on their individual energy requirements [68] .A prosumer is an energy user who produces energy from renewable sources such as photovoltaic arrays or wind turbines, rather than counting on the power plant's supply alone, and shares this energy with the grid's other consumers.Therefore, a prosumer can be recognized as a stakeholder who uses electrical power and also contributes to the grid by generating power at a certain point in time.The grid follows the bidirectional data flow and energy flows between the stakeholders, analysis of which may provide important information for the electrical grid function and for energy distribution optimization [69] .Overall, prosumers, smart information, bidirectional communications, and advanced analytics are regarded as the basic components of the SG.The main characteristics of the prosumer's profile can be summarized as (i) energy production; (ii) energy storage and sharing; (iii) energy consumption; and (iv) peer-to-peer transactions.
However, prosumers are differentiated from consumers because they are considered to be an advanced version of the latter and provide significant advantages for both their energy management and the entire grid.Table 3 highlights the main differences between the two energy client profiles described above.
Smart Grid 1.0 and Smart Grid 2.0
In the literature, two generations of SGs have been proposed.Both generations share the same goals, i.e., improving power distribution architectures to support real-time operation and thus achieve greater resilience and adaptability for an SG within a smart city environment.However, the two generations are differentiated by the fact that Smart Grid 1.0 mainly focused on technological evolution of the existing infrastructure based on the recent technological advances from both Industry 4.0 and digital technologies, including ICT.In contrast, Smart Grid 2.0 is mainly focused on the involvement of the customers in a variety of SG operations, including power generation, storage, distribution, and marketing.Therefore, as part of the framework for Smart Grid 2.0, the use of a peer-to-peer (P2P) architecture is imperative.Furthermore, Smart Grid 2.0 also focuses on distribution automation (DA) and an AMI.The DA can provide a self-healing, digitally controlled network to ensure reliable electric power delivery.The concept also encompasses demand response, smart home automation, distributed generation, distributed storage, and automated control.It is stressed that Smart Grid 2.0 is regarded as the energy Internet (EI) from an information technology perspective [70] .Although the current electrical grid is being transformed into an SG, open energy or EI is rapidly gaining popularity.An EI is an example of this trend, which is also being referred to as the Smart Grid 2.0 era [71] .
Consequently, the role of the intermediaries is eliminated.Furthermore, the second generation of SGs supports improved data acquisition and cybersecurity mechanisms to ease the development of more robust and precise decision-making tools.In Table 4, the key differences between Smart Grid 1.0 and Smart Grid 2.0 have been compiled and categorized based on their domain of interest.
Blockchain in Smart Grid operation
Among the challenges listed in the Introduction, it has also been proposed that the integration of blockchain technology, because of its advantages and its development during Industry 4.0, will enable engineers to provide additional functionalities in the SG. Therefore, in this Section, the challenges and opportunities of blockchain technology are discussed, along with the steps required for implementation. Briefly, blockchain is based on the creation and constant updating of a distributed ledger following a common consensus policy. Essentially, the adoption of such a technology will make it easier for multiple stakeholders within a network (independently of the network's size) to maintain a common track of the exchanges that take place within the network. The process above can easily be paralleled and implemented within an SG [Figure 5].
In the work of Dehalwar et al., the authors proposed a methodology for integration of distributed ledger technologies and techniques in an attempt to improve the management of an SG by building a trust management policy [75]. Similarly, Guo et al. also investigated the topic of blockchain within the SG environment [76]. Among the key findings of this literature review, the authors stated that this technology will enable additional managerial functionalities by focusing on the decentralization of SG management, and it may be extended beyond infrastructures to be used in electric vehicles as well [77]. Figure 5 presents a generalized roadmap for a blockchain-enabled SG.
One of the most important aspects of blockchain technology is smart contracts, which can enable energy providers to encode market rules and by extension to ease automation of the pricing process at the individual prosumer and microgeneration levels [78]. As a result, with the adoption of a smart contract policy, complex power distribution networks and their modules can be segmented with respect to direct participants to promote independent operation while also maintaining supervision and coordination without the need for third-party involvement.
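The following sketch illustrates, under strong simplifications, the ledger idea behind such P2P energy trading: each block records a batch of trades and is chained to its predecessor by a hash, while a toy pricing rule stands in for a smart contract. The tariffs, trades, and pricing rule are assumptions, and a real deployment would additionally require a consensus mechanism among the participants.

```python
# Minimal sketch of a hash-chained ledger for P2P energy trades. The pricing
# rule standing in for a "smart contract" and all trade values are
# illustrative assumptions; no consensus protocol is modelled here.
import hashlib
import json
import time

def make_block(trades, prev_hash):
    """Create one block; its hash covers the trades and the previous hash."""
    body = {"timestamp": time.time(), "trades": trades, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def settle(trade, feed_in_eur=0.10, retail_eur=0.30):
    """Toy pricing rule: the P2P price is set halfway between the two tariffs."""
    price = 0.5 * (feed_in_eur + retail_eur)
    return round(trade["kwh"] * price, 4)

if __name__ == "__main__":
    chain = [make_block([], prev_hash="genesis")]
    trades = [{"seller": "prosumer_A", "buyer": "household_B", "kwh": 3.2},
              {"seller": "prosumer_C", "buyer": "household_D", "kwh": 1.5}]
    for t in trades:
        t["amount_eur"] = settle(t)
    chain.append(make_block(trades, prev_hash=chain[-1]["hash"]))
    # Any participant can verify that the chain has not been tampered with.
    for prev, curr in zip(chain, chain[1:]):
        assert curr["prev_hash"] == prev["hash"]
    print(json.dumps(chain[-1], indent=2))
```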
Integrating AI into the renewable energy domain
The SG concept is one of the most challenging aspects of the realization of the smart cities of the future. In general, AI has proven to be a useful tool for imitation of human intelligence in machines and computers. Similarly, AI is useful in the energy sector, enabling processing of the vast amounts of data produced within an SG, and also coping with the grid's increasing complexity. In particular, in the renewable energy (RE) sector, AI provides better monitoring, operation, maintenance, and storage of the electrical energy produced. Consequently, the contributions of AI to the RE sector can be summarized in the following, and are illustrated in Figure 6 [79,80]:
• Energy generation while considering supply volatility
• Grid stability and reliability
• Grid demand and weather forecasting
• Grid demand-side management
• Energy storage operations
• Market design and management
The key applications in RE systems are summarized as follows:
• Smart matching of supply and demand [81]
• Intelligent storage [82]
• Centralized control systems [83]
• Smart microgrids [84]

Overview of AI techniques for the Energy 5.0 distributed model
Future power systems can support the incorporation of renewable energy resources (RERs) by using SG technologies. With high penetration of distributed generation into power systems and advancements in ICT associated with customer data, the electric power grid can be transformed [85]. AI-enabled smart energy markets can make it simpler to establish effective policy incentives and allow both consumers and utility companies to make decisions about their own consumption and generation in a way that reduces CO2 emissions. Designing automation technologies for heterogeneous devices that can learn to adjust their consumption vs. pricing signals with user constraints, creating a means of communication between humans and the controllers, and creating simulation and prediction tools for consumers are among the challenges that face AI in electrical power systems.
Intelligent tools and methods are required to manage the system appropriately and to make timely choices as the energy sector becomes more complex. The problems of classification, forecasting, networking, optimization, and control methods can be solved using artificial neural networks (ANNs), reinforcement learning (RL), genetic algorithms (GAs), and multi-agent systems [86]. Because of the lack of sufficiently sophisticated automatically controlled resources, many system operations are still conducted manually or with only the most basic automation. However, introduction of AI into the grid system would lead to breakthroughs and provide new directions for development of the electrical grid. Figure 7 illustrates the entire distributed SG concept with AI methods and the use of techniques to provide cost savings and perform optimization. To optimize the controllable loads, Atef and Eltawil [87] described a GA for management of standalone microgrids (MGs). AI techniques are now providing considerably more effective and powerful ways to deal with the limitations of conventional grid systems as a result of advancements in computer power and the ready availability of data storage. Additionally, several security issues have arisen as a result of the application of distributed computing algorithms in SGs. Threats including physical attacks and cyberattacks can result in infrastructure failure, privacy breaches, service disruption, and denial of service (DoS) [88].
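As an illustration of the kind of optimization such techniques target, the sketch below uses a generic genetic algorithm to shift a controllable load profile toward cheap hours while meeting a daily energy requirement; it is not the algorithm of the cited microgrid study, and the prices, penalty weight, and GA settings are assumptions.

```python
# Generic GA sketch for shaping a controllable load profile. Prices, the
# energy target, penalties, and the GA hyper-parameters are all assumptions.
import random

PRICES = [0.30, 0.25, 0.15, 0.10, 0.12, 0.20, 0.28, 0.32]  # synthetic, EUR/kWh
REQUIRED_KWH = 20.0
MAX_KW = 6.0

def cost(profile):
    energy_cost = sum(p * q for p, q in zip(PRICES, profile))
    shortfall = abs(sum(profile) - REQUIRED_KWH)
    return energy_cost + 10.0 * shortfall          # penalise missing the target

def random_profile():
    return [random.uniform(0, MAX_KW) for _ in PRICES]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(profile, rate=0.2):
    return [min(MAX_KW, max(0.0, q + random.uniform(-1, 1)))
            if random.random() < rate else q for q in profile]

def run_ga(pop_size=40, generations=200):
    population = [random_profile() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)
        parents = population[: pop_size // 2]          # simple truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=cost)

if __name__ == "__main__":
    best = run_ga()
    print([round(q, 2) for q in best], round(cost(best), 2))
```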
Energy management system
As a result of the availability of sufficient customer data, computing power, and potential training algorithms, AI has now developed to the point where it can even predict customer electricity prices in complex environments. A comparative analysis of such intelligent schemes that concentrated on deep learning (DL) and support vector regression (SVR) has been presented in the literature [89]. In addition, demand response (DR) represents the deviation of the end users' normal electricity consumption patterns from those suggested by the utility, or the use of financial incentives to prevent the reliability of the power system being jeopardized as a result of peak demand [90]. Recently, several research works available in the online literature have concentrated on the use of AI techniques to predict energy demand patterns [91,92]. To overcome the uncertainty in future electricity prices, Al-Fuqaha et al. proposed an hour-ahead DR algorithm that used RL and an ANN and also took user comfort and consumption behavior into account [93]. The various energy management system types for use in SGs with the enabling techniques discussed in this paper are summarized in Figure 8.
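A minimal sketch of SVR-based price forecasting of the kind compared in these studies is shown below; the lag-based feature construction and the synthetic price series are assumptions made only for this example, and it requires NumPy and scikit-learn.

```python
# Minimal sketch of SVR-based hour-ahead electricity price forecasting.
# The synthetic price signal and the 24-lag features are assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def lagged_dataset(prices, n_lags=24):
    """Predict the next hourly price from the previous n_lags prices."""
    X = np.array([prices[i:i + n_lags] for i in range(len(prices) - n_lags)])
    y = np.array(prices[n_lags:])
    return X, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hours = np.arange(24 * 60)
    # Synthetic price signal: daily cycle plus noise (EUR/MWh).
    prices = 50 + 15 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)
    X, y = lagged_dataset(prices.tolist())
    split = len(X) - 24                      # hold out the last day
    model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    mae = np.mean(np.abs(pred - y[split:]))
    print(f"hour-ahead MAE on held-out day: {mae:.2f} EUR/MWh")
```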
Integration of the Smart Grid with other smart elements
It has become challenging for grids to control the demand for electricity for both household and industrial uses as a result of rapid population growth and the expansion of various industries.Short circuits and transformer failures are two issues caused by the increased demand for electricity at specific times of the day.To deliver electricity effectively, it is necessary to predict the customer consumption patterns to address the problems with traditional grids for electricity transmission.To that end, the concept of the SG has been introduced.An SG can transmit electricity based on the anticipated demand by using its intelligence to predict electricity demand.An SG can address many of the issues faced by traditional grids, including demand forecasting, reduction of power consumption, and reduction of the risks of short circuits, thus preventing the loss of lives and property [94] .The true potential of SGs has been unlocked by technological advancements such as the IoT [95] , fifth generation wireless networks and beyond (5G) [96] , big data analytics [97] , and ML [98] .
As shown in Figure 9, an SG has multiple stakeholders and can be connected to several other smart areas, including smart cities, buildings, vehicles, and power plants.
Energy trade market effect on production scheduling: an IPSS approach
Hardware, software, and services can be integrated into a distribution platform known as "energy-as-a-service" (EaaS). Such a solution should promote use of decentralized supply sources and renewable energy, provide demand control and energy storage technologies, and maximize the equilibrium between supply and demand [99,100]. This business model can also be applied to the SG. Customers can use the electricity and also generate, distribute, and trade resources with other consumers thanks to the SG, which enables bilateral communication and data transfer between electricity customers and the power grid. Cost and capacity are the two factors that define energy production. However, a rise in the price is often seen when the demand exceeds certain capacity thresholds. A trade-off between capacity, price, and consumer demand-satisfaction results from taking rising worldwide demand into account and using the available capacity in the best possible way at the lowest possible cost. As a result, energy demand consumption optimization, or smart usage, becomes crucial. Additionally, energy utilities are altering their business models by providing clients with energy-related services via energy service contracts, and thus raising the value provided. The business model used in this approach, which is based on the current business-to-business (B2B) strategy, is the provision of an energy-oriented IPSS [101]. Provision of energy can be regarded as a product service, with the contract acting as both a tangible good and a collection of intangible services. Through this collaboration, the energy provider and the customers gain mutually beneficial outcomes [102]. To that end, a system architecture proposed in the literature is depicted in Figure 10 [103].
In this architecture, an ESC has a particular pricing strategy with distinct tariffs and is seeking to build an ecosystem to offer dynamic energy pricing. An EDM service, which is fed with the "production-consumption profiles" of each individual industry within the ecosystem, is necessary to accomplish that aim. Visualization and monitoring of the manufacturers' infrastructures, or the virtual circuit, through a wireless sensor network (WSN) that allows the EDM to know when, where, and why energy is consumed represents an innovation point of the proposed methodology. The production equipment is equipped with wireless data acquisition devices (DAQs) that transmit data to a cloud server. This part of the service's primary function aims to offer insights into the energy consumption of the machines and to connect that information to the energy consumption profile of the corresponding customer. As a result, data from the machines can be used to predict future energy demand, thus enabling identification of energy grid peaks. An alert is then sent to a specific group of high-load industries as soon as a peak has been identified, requesting that they shift their load to smooth the estimated peak. However, if a company disregards the alert, they will be charged according to a high demand period tariff until they follow the suggested instructions for the method. As a result, an adaptive scheduling algorithm is activated on the manufacturer's side to guarantee grid stability and reduce the necessary energy consumption during peak demand periods. The data collected from the customers not only helps ESCs produce better predictions of their customer needs and increase the system's efficiency, but it also helps them to cut costs by managing the energy demand directly. The proposed IPSS system architecture shown in Figure 10 summarizes and presents the relevant steps.
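The peak-handling logic described above can be sketched as follows: segment demand is aggregated from the reported production-consumption profiles, hours above a threshold are detected, and the largest loads are asked to shift. The threshold, the profiles, and the tariff message are assumptions for illustration.

```python
# Minimal sketch of the EDM peak-handling step: aggregate per-hour demand,
# detect overloaded hours, and flag the largest consumers for load shifting.
# The capacity limit, profiles, and tariff message are illustrative assumptions.
def detect_peaks(profiles_kw, peak_limit_kw):
    """Sum the per-hour profiles of all customers and return overloaded hours."""
    hours = range(len(next(iter(profiles_kw.values()))))
    total = [sum(profile[h] for profile in profiles_kw.values()) for h in hours]
    return [h for h in hours if total[h] > peak_limit_kw], total

def build_alerts(profiles_kw, peak_hours, top_n=2):
    """Ask the top_n largest consumers in each peak hour to shift load."""
    alerts = []
    for h in peak_hours:
        biggest = sorted(profiles_kw, key=lambda c: -profiles_kw[c][h])[:top_n]
        alerts += [{"customer": c, "hour": h,
                    "message": "shift load or high-demand tariff applies"}
                   for c in biggest]
    return alerts

if __name__ == "__main__":
    profiles = {                       # kW per hour, reported by each industry
        "steel_plant":   [400, 420, 900, 950, 500, 300],
        "food_factory":  [150, 160, 300, 320, 180, 150],
        "cold_storage":  [100, 100, 120, 130, 110, 100],
    }
    peak_hours, total = detect_peaks(profiles, peak_limit_kw=1200)
    print("segment load:", total, "peak hours:", peak_hours)
    for alert in build_alerts(profiles, peak_hours):
        print(alert)
```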
A collaborative approach to energy-based offered services
Mocanu et al. proposed an IPSS framework for the energy sector to create a smart service-based energy ecosystem, as illustrated in Figure 11 [105]. The proposed framework was validated by performing a real-world case study with a European electricity distribution company.
The framework above is based on utilization of energy services that can be used by both the energy suppliers and their customers. The proposed services are as follows: a power outage planner, an energy demand manager, and an Environmental Impact Calculator (EIC). To enable operation of these services, customers must provide real-time and historical consumption data. The data are gathered from smart meters that have been installed in the customers' facilities. Depending on the embedded sensors, e.g., in the lighting virtual circuit, the cooling virtual circuit, or a machine virtual circuit, all the customer facilities are discretized into virtual circuits that represent some of the factory structures. Every customer will have a smart meter that measures their consumption in kWh, and this meter is installed at the enterprise level and on every virtual circuit. The virtual circuit meters take their measurements in real time, while the enterprise-level meters take measurements every 15 min. The virtual circuit is a branch of the customer passport for each customer. The architecture is divided into two flows: one that is based on the information about electricity use from the smart meters, and another that is based on information about the grid power outages obtained from interactions between the suppliers and the customers. To generate an image of each customer's environmental impact, the EIC service converts each customer's consumption into emissions. Diagrams that show the quantities of the emissions from each customer can also be generated by this service. The supplier then notifies the customers of any power outages caused by grid issues through the power outage planner and schedules planned maintenance in conjunction with the customers who will be affected. Every customer provides an estimate of how much energy will be used by each virtual circuit. When a submission is made, the supplier then verifies that the combined consumption of all customers in each individual grid segment is less than the peak level. If it is not, the supplier then alerts the customers of that section, suggests a schedule change, and then asks them to recheck to see if the power consumption has fallen below the peak level. If it has, the supplier does not offer any additional recommendations. The third service, which is called EDM, is composed of these actions. The web-based platform on which all services are built relies on communication between the ESC and the energy consumer-industrial SME customer. Because the services are dependent on customer input, ongoing customer involvement, and the permission to install meters in their facilities, customer cooperation is essential. The electrical system of the energy client, the installed sensor systems, and an IT system infrastructure that gathers the data from the sensor systems comprise the energy IPSS.
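A minimal sketch of the EIC service idea is given below: per-circuit consumption in kWh is converted into CO2-equivalent emissions using a grid emission factor. The factor and the readings are assumptions; a real service would use the supplier's actual generation mix.

```python
# Minimal sketch of an Environmental Impact Calculator: virtual-circuit
# consumption is converted into CO2-equivalent emissions with an assumed
# grid emission factor; all numbers are illustrative.
GRID_FACTOR_KG_PER_KWH = 0.25   # assumed average emission factor

def circuit_emissions(readings_kwh, factor=GRID_FACTOR_KG_PER_KWH):
    """Return per-circuit and total CO2-equivalent emissions in kg."""
    per_circuit = {name: round(kwh * factor, 2) for name, kwh in readings_kwh.items()}
    return per_circuit, round(sum(per_circuit.values()), 2)

if __name__ == "__main__":
    # Daily consumption aggregated per virtual circuit from the smart meters.
    readings = {"lighting": 120.0, "cooling": 340.0, "machine_line_1": 910.0}
    per_circuit, total = circuit_emissions(readings)
    print(per_circuit, f"total = {total} kg CO2e")
```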
Smart Grid based on IoT services
According to Gartner, 20.8 billion connected devices were in use worldwide in 2020 and this number is predicted to hit approximately 29 billion by the end of 2022, with approximately 18 billion of these devices being related to the IoT [105] .The SG, as a major consumer of autonomous connected devices, not only uses millions of IoT devices but also processes enormous amounts of data to improve its understanding of the SG network.On a global scale, approximately 23 million smart meters were installed and used in 2019 and this number is projected to reach 188 million by 2025, representing a compound annual growth rate (CAGR) of 6.6% during the forecast period.To increase the effectiveness of power networks, SGs are being implemented worldwide [106] .IoT and big data analytics are essential components of the SG.Therefore, it is imperative to integrate ML with IoT sensors and devices at various levels in the SG to enable analysis of the entire ecosystem and optimization of the important parameters, e.g., cost and energy resource balance, and ultimately to form an intelligent SG, as shown in Figure 12.
Application of the IoT in the SG may fall into one or more of the following categories:
1. IoT is applied to use different IoT smart devices for monitoring of the equipment status;
2. IoT is applied to collect information from equipment with the support of its linked IoT smart sensors and devices through a diverse range of communication tools;
3. IoT is applied to supervise the SG across the application interfaces.
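As a small illustration of the first two roles, the sketch below keeps a rolling per-segment view of smart-meter messages that a supervisory application could query; the message format and segment names are assumptions, and a deployed system would receive such messages over a protocol such as MQTT.

```python
# Minimal sketch of smart-meter telemetry aggregation per grid segment.
# The message format, segment names, and window size are assumptions.
from collections import defaultdict, deque

class SegmentMonitor:
    def __init__(self, window=4):
        # Keep only the last `window` readings per meter (rolling window).
        self.readings = defaultdict(lambda: deque(maxlen=window))
        self.segment_of = {}

    def ingest(self, message):
        """message: {'meter': str, 'segment': str, 'kw': float}"""
        self.segment_of[message["meter"]] = message["segment"]
        self.readings[message["meter"]].append(message["kw"])

    def segment_load(self, segment):
        """Latest total load of one grid segment, in kW."""
        return sum(r[-1] for m, r in self.readings.items()
                   if self.segment_of[m] == segment and r)

if __name__ == "__main__":
    monitor = SegmentMonitor()
    for msg in [{"meter": "m1", "segment": "feeder_A", "kw": 3.2},
                {"meter": "m2", "segment": "feeder_A", "kw": 1.8},
                {"meter": "m3", "segment": "feeder_B", "kw": 4.0},
                {"meter": "m1", "segment": "feeder_A", "kw": 2.9}]:
        monitor.ingest(msg)
    print("feeder_A:", monitor.segment_load("feeder_A"), "kW")
    print("feeder_B:", monitor.segment_load("feeder_B"), "kW")
```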
RESEARCH CHALLENGES AND SOLUTIONS
The major issues that traditional electric grids and the conventional method for electricity distribution have faced have seen significant improvement as a result of the integration of SG technology.For handling of high-dimensional data and ensuring efficiency in the data transactions throughout the energy supply chain, SG technology uses ML techniques, and more specifically, the subset of DL approaches.Additionally, through skillful control of consumer power consumption and by enabling energy-sharing facilities, SG technology places a strong emphasis on consumer satisfaction, thus transforming the consumer into a prosumer (producer and consumer).The SG still has some unresolved demand response (DR) management problems and upcoming difficulties that must be resolved for improvement of future electricity requirements.This section discusses the various research challenges, the current problems, and future directions for SG technology.Some of the most prevalent challenges and the corresponding technical solutions are summarized in Table 5.
The SG 2.0 grid architecture is composed of four layers, as illustrated in Figure 13, which also shows the fundamental elements of each layer.Specifically, the SG2.0 architecture comprises (i) a physical component layer; (ii) a communication and control layer; (iii) an application layer; and (iv) a data analysis layer.
The physical component layer embraces the sensory devices that enable data collection for real-time monitoring and decision-making, including IoT devices [e.g., smart meters, smart loads, smart sensors, phasor measurement units (PMUs), remote terminal units (RTUs), current transformers (CTs), and voltage transformers (VTs)].Shared data and information can be used to perform real-time monitoring and control of Smart Grid 2.0 thanks to the communication infrastructure and Internet-based protocols [110] .With a low level of assistance from a third-party service provider, the communication and control layer enables fast, dependable, and in-the-moment communication between devices (machine-to-machine, or M2M).
The application layer of the future Smart Grid 2.0 comprises the services related to electric vehicles (EVs) for trading with charging stations through P2P energy transactions, microgeneration and large-scale power plants that incorporate renewable energy generation, battery storage, and automated, efficient, and dependable transmission/distribution networks [111] .
The data analysis layer aids in performing cloud-centric data management, trend analysis, and grid control at an underlying level [112] .The Smart Grid 2.0 infrastructure includes data management, secure data routing, privacy preservation, and dependable storage.
The AMI manages the data that are derived from all the above layers because it collects synchronized smart meter measurements from both consumer and prosumer locations and is connected to the communication network.
LIMITATIONS IN SMART GRIDS TOWARDS ENERGY 5.0
An SG can be realized as a set of technologies that improve both existing and new electricity distribution networks by the provision of intelligence, which in turn promotes better usage of the electrical energy and higher distribution efficiency. The application of detection, measurement, and control devices with suitable communication channels to all parties involved in the electricity production, transmission, distribution, and consumption processes makes the intelligent network possible, and allows users, operators, and the automated devices to receive information with regard to the status of the network and to respond dynamically to changes in the condition of the grid. Although several benefits with regard to the implementation of SGs are listed below, certain limitations are also emerging [113,114].
Smart Grid benefits
• Increased reliability
• Operational efficiency and optimization of investment
• Network operation and planning
• Variations in the cost of energy
• SGs offer the potential to reduce electricity consumption by 30%
• Increased user responsiveness with personalized consumption
CONCLUDING REMARKS AND OUTLOOK
In this manuscript, a literature review of the current state of the art in the field of electrical energy generation and distribution has been presented and discussed.As part of this discussion, important concepts such as Energy 4.0, Energy 5.0, and SGs have been described.Among the key findings of the literature investigation, it was evident that the modern societal, academic, and industrial worlds are converging toward the design, development, and implementation of sustainable solutions to reduce environmental pollution and their environmental footprint.More specifically, several government organizations have presented and continue to work on such initiatives, with the focus on long-term and incremental implementation of technologies and techniques that were mainly introduced and developed within the framework of Industry 4.0.
The contribution of the manuscript also extended to detailed discussion of the technical frameworks required for integration of SGs into modern society and industry.These frameworks are based on previous implementations for smart and intelligent energy distribution management systems that were mainly used in the manufacturing domain.These implementations have created information islands that can enable the development of a broader SG by providing additional data.However, given that the frameworks above can be elaborated further, the creation of a wider SG that goes beyond the industrial sector is feasible.
Following consideration of the recent technological advances discussed in the preceding paragraphs along with the opportunities arising in the field, future work will be focused on further expansion of the existing frameworks to allow them to work in collaboration and form an industrial SG.Then, connection of the industrial SG with the academic SG proposed by the authors in a recent research work (not available online to date) will follow, with the aim of expansion of the SG and the functionalities provided to the energy producers.
Furthermore, more experimental tests will be required to combine the energy demands of clients on different tiers, to distribute the electrical energy more efficiently, and wherever possible to minimize the load to the grid and support the use of electrical energy derived from alternative power sources.One of the most important problems faced in the current grids is the lack of a suitable infrastructure to transform them into SGs.In this context, further elaboration of the existing frameworks will be required.
Figure 1. Network visualization of the literature topic areas.
Figure 2. Key developments in energy in parallel with industrial revolutions [15].
Figure 3. Energy 4.0 business models as a basis for development of the Energy 5.0 business model. AbLaV: Load management, interruptible loads; BNetzA: German Federal Network Agency for Electricity, Gas, Telecommunications, Postal Services and Railways; EnWG: Energiewirtschaftsgesetz (German Energy Industry Act); IT: information technology; MiFiD: markets in financial instruments directive (2004/39/EC); REMIT: regulation on wholesale energy market integrity and transparency.
Figure 4. Migration of Industry 4.0 digital technologies to Energy 5.0 to enable predictive maintenance in the electric power industry.
Figure 5. Roadmap for blockchain technology integration within the Energy 5.0 framework.
Figure 7. Architecture of the Energy 5.0 distributed model.
Figure 8. Taxonomy of AI techniques for each management sector in smart energy.
Figure 9. Energy 5.0 as part of a smart and intelligent society.
Figure 11. Energy services framework system architecture.
Figure 12. ML and IoT integration for utility management and control in a smart grid.
Table 1. Review methodology
Database: Scopus
Identification of publication type: Journal articles, conference papers, books, and book chapters
Language: English
Choice of the field of publication: Engineering and manufacturing relevant domains
Screening & paper selection procedure: Full paper available; article in English; article in the manufacturing domain; article related to maintenance
Results: 181
PSS: Product-service systems
Table 2. Relevance of SDGs for sustainable energy and digital industrial development [23]
SDG 7 (Affordable and clean energy): SDG 7 is focused on the use of clean, affordable energy. By 2030, it wants to make sure that everyone has access to modern, sustainable, affordable energy. This entails significantly raising the proportion of renewable energy sources in the world's energy mix as well as doubling the rate of energy efficiency growth everywhere. With a focus on least developed nations, small island developing states, and landlocked developing nations, SDG 7 specifically aims to develop infrastructure and sustainable energy services for all in developing countries.
SDG 9 (Industry, innovation and infrastructure): SDG 9 is concerned with business, innovation, and infrastructure. It aims to create robust infrastructures, advance inclusive and sustainable industrialization, and encourage innovation. The promotion of inclusive and sustainable industrialization, as well as increasing the share of manufacturing employment and the proportion of manufacturing value added to the gross domestic product, are specifically highlighted as goals under SDG 9. A specific goal of SDG 9 is to increase access to Information Communication Technologies (ICTs) and provide universal, accessible, and affordable Internet access to the least developed nations by 2020.
SDG 13 (Climate action): SDG 13 is focused on addressing climate change, including stepping up efforts to both mitigate it and adapt to its effects. The Paris Agreement, which went into effect on 4 November 2016 and represents a significant turning point for international efforts to mitigate and adapt to climate change, is closely related to the implementation of SDG 13.
SDG: Sustainable development goal.
• Intelligent infrastructure technology for global energy distribution that could reduce greenhouse gas emissions from the energy sector
Limitations
• Increased cost caused by the replacement of analog meters with more sophisticated smart meters
• Lack of regulatory standards for SG technology
• Lack of official technology documentation
"year": 2022,
"sha1": "c8a62fff57f97424b74a6fafb1ea5646f632aa69",
"oa_license": null,
"oa_url": "https://www.oaepublish.com/gmo/article/download/5342",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7b2e82ffb36aa4a515568abd723717d148bcff63",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": []
} |
SoC Estimation in Lithium-Ion Batteries with Noisy Measurements and Absence of Excitation
Accurate State-of-Charge estimation is crucial for applications that utilise lithium-ion batteries. In real-time scenarios, battery models tend to present significant uncertainty, making it desirable to jointly estimate both the State of Charge and relevant unknown model parameters. However, parameter estimation typically necessitates that the battery input signals induce a persistence of excitation property, a need which is often not met in practical operations. This document introduces a joint state of charge/parameter estimator that relaxes this stringent requirement. This estimator is based on the Generalized Parameter Estimation-Based Observer framework. To the best of the authors' knowledge, this is the first time it has been applied in the context of lithium-ion batteries. Its advantages are demonstrated through simulations.
Introduction
In the current energetic scenario, decarbonisation of the electrical grid is a primary objective. In this context, energy storage plays a crucial role, as the use of Electric Vehicles (EVs) is spreading [1,2] and the penetration of renewable energy sources in the grids increases [3]. While there are several Energy Storage System (ESS) technologies, lithium-ion batteries (LIBs) are currently the most popular one, as they are flexible, efficient and offer a good trade-off between energy density and power density [4,5].
In real applications, Li-ion batteries need to be monitored to ensure that their operation is within safety limits. A Battery Monitoring System (BMS) ensures this objective by monitoring and managing several variables of the cell [6]. Among all these variables, the most important one is the State of Charge (SoC), which can be defined as the ratio of the remaining capacity of the battery to its nominal capacity. The knowledge of the SoC allows system monitoring to occur while providing the users with information about how much energy is stored in the battery, facilitating the decision-making process. Simple examples of this can be the range of an EV or the charging/discharging management of a battery connected to a microgrid. A more concrete example is the charging process of a battery, in which the SoC is crucial to set limits that prevent battery degradation. In this sense, fast charging [7-9] drastically approaches this limit, and it is in this application that a guaranteed estimation of the SoC can ensure its viability.
However, the SoC cannot be easily measured by any common sensor.For this reason, the SoC estimation is a popular topic in the literature.The most traditional methods of SoC estimation are Open Circuit Voltage (OCV) measurement and Coulomb counting.The OCV measurement relies on directly measuring this variable and using an explicit function that relates the OCV and SoC to infer the value of the SoC.Nonetheless, as the OCV can only be directly measured when the battery is used in open-circuit conditions, this method is impractical in most applications [10][11][12].On the other hand, if OCV was measurable, it can easily be associated with SoC, as can be seen in Section 2.2.Coulomb counting is based on computing an integration of the exchange current entering or leaving the cell over time, thus providing a measure of the total extracted energy.This method requires precise knowledge of the capacity of the battery, as well as an initial SoC.If these parameters could be known beforehand, Coulomb Counting would surpass any other methods, but the impossibility of this, as well as the changes in capacity due to battery ageing, make the use of other options desirable.Moreover, the errors in the measurement and capacity are accumulated over the full process of integration.In [13], these issues-as well as other sources of errors associated with Coulomb counting methods-are described.Clearly, the limitations of these methods have motivated alternative methods to estimate the battery SoC.In this context, data-driven methods rely on data to predict the behaviour of the battery, linking the measurable information to key indicators including SoC.However, training and a large data set are required to obtain such a model; this often involves using machine learning algorithms such as Artificial Neural Network (ANN) or Support Vector Machine (SVM), among many others.ANN, a neural network model inspired by human brain structure, excels in capturing complex relationships within data.On the other hand, SVM, a robust classification and regression technique, is adept at handling high-dimensional data and finding optimal decision boundaries.Both ANN and SVM contribute to enhancing the predictive capabilities of models, but their effectiveness depends on the quantity and quality of the available SoC data [14].Alternatively, one can utilise observers [15] to estimate the SoC by means of a model of the battery dynamics.In this approach, the Kalman Filter (KF) and its many variations are popular and widely used, among many other observer families that serve this purpose.Observers can be linear, e.g., Extended Kalman Filter (EKF) [16,17] or H ∞ observer [18], or non-linear.The latter category contains estimators such as Unscented Kalman Filter (UKF) [19,20], particle filtering [21], Sliding mode observer (SMO) [22][23][24], High Gain Observer (HGO) [25], Adaptive Observer [26,27] or circle-criterion Observer [28], among other observers that can be used for this purpose.We refer to our previous work [29], in which all these observers were briefly reviewed.More information about SoC estimators can be found in the reviews provided by [30] based on lithium-ion batteries, or [31,32] for other electrochemical ESS such as redox flow batteries.
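To make the Coulomb-counting idea above concrete, and to show where the errors in the assumed capacity and initial SoC enter, the following Python sketch implements the basic current integration. It is only illustrative: the function name, the 2.3 Ah capacity and the 0.9 initial SoC are placeholder assumptions, not values taken from any specific cell.

```python
import numpy as np

def coulomb_counting(i_bat, dt, soc0, capacity_ah):
    """Coulomb-counting SoC estimate.

    i_bat       : array of battery current samples [A] (discharge taken as positive)
    dt          : sampling period [s]
    soc0        : assumed initial SoC, in [0, 1]
    capacity_ah : assumed nominal capacity [Ah]
    """
    capacity_as = capacity_ah * 3600.0                 # Ah -> ampere-seconds
    soc = soc0 - np.cumsum(i_bat) * dt / capacity_as   # integrate the current over time
    return np.clip(soc, 0.0, 1.0)

# Example: 1 h discharge at 1 A from an assumed 2.3 Ah cell starting at 90% SoC.
t = np.arange(0, 3600, 1.0)
i = np.ones_like(t)                                    # 1 A discharge
soc = coulomb_counting(i, dt=1.0, soc0=0.9, capacity_ah=2.3)
print(f"final SoC: {soc[-1]:.3f}")                     # any error in soc0 or capacity biases this result
```

Any bias in the assumed initial SoC or capacity, or in the current measurement, accumulates through the integral, which is exactly the limitation discussed above.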
The use of observers requires the knowledge of a model that needs to describe the behaviour of the battery.Battery modelling is also a widely discussed topic, with a variety of models varying in complexity.The category of mechanistic models includes the models that consider the electrochemical phenomena inside the cell, modelling diffusion of the ions and electrolyte inside the cell.The Doyle-Fuller-Newman (DFN) model [33] is the first and most popular of this kind of model, and it is characterised by modelling the diffusion of lithium ions inside the battery using Fick's laws of diffusion, as well as considering the particles of the spherical electrodes.DFN is followed by a simplified version known as Single Particle Model (SPM), which has the same basis but only considers one particle in each electrode [34].More details on these mechanistic models are provided by [35].Up to this point, a simple definition of SoC has been provided, as a proportion of the current capacity against the maximum capacity of the battery.Such a simple definition is not sufficient in electrochemical models, where the definition of SoC takes into account the SoC at the bulk and at the surface of the battery [36], which is related to the concentration of Lithium in the battery.Keeping this in mind, ref. [37] reviews the observers used to estimate SoC considering these models.
Machine learning algorithms are also used in the context of battery modelling.A common application is the use of machine learning to extract a model and combine it with observers such as KF in order to estimate non-measurable states.Some remarkable examples are [38,39].Finally, equivalent circuit models (ECM) are a third type of modelling with high popularity in the literature.Two distinct subtypes can be identified.Electrochemical ECMs employ a combination of electrical components along with Constant Phase Elements to replicate the cell's frequency response [40,41].Phenomenological ECMs represent a purely electric circuit that emulates the dynamic behaviour of the battery.Due to their simplicity and low computational demands, these models enjoy widespread popularity.This document focuses on the latter type of model.More precisely, we propose an observer to estimate the non-measurable OCV, and then use the relation between OCV and SoC to infer the value of the SoC.
Besides selecting a proper observer structure that is coherent with the battery model structure, observers usually require additional assumptions in order to properly estimate the SoC.Some of these assumptions are related to the observability of the system; that is, that the measured signals are sufficiently rich in information to infer the values of the states.Most dynamic systems, in addition to states, contain parameters that must be adjusted.This can be carried out offline [42], giving rise to an identification problem, or online [43].Online parameter estimation requires that the input and output signals of the system satisfy the so-called persistent excitation condition.The persistence of excitation is a condition in which a system's input or stimulus remains active for a sufficient length of time that the presence of the unknown parameters produce a measurable effect on the system's behaviour, even after the input is removed [44].The joint estimation of the states and parameters of a dynamical system is a much more complex problem that usually requires the simultaneous fulfilment of the properties of observability and persistent excitation.
In this document, we acknowledge the difficulty of having a properly tuned model and, thus, we begin with an ECM of almost all unknown parameters.An adequate construction of the model in the state space framework allows the unknown parameters to be treated as states, thus allowing the joint estimation of unknown parameters and states.Following a similar approach, in [45], the authors provided a similar state-space representation for an ECM with the same objective of estimating the ECM.In [45] it was shown, by means of the observability Gramian, that the OCV is observable without knowledge of the circuit parameters, as long as the persistence of excitation is always satisfied.Observing OCV, SoC can be indirectly computed.In this document, the authors generalise this result for the ECM shown in Figure 1, which is currently more popular than the one shown in [45].
Additionally, in this work, we make the observation that the persistence of excitation condition is not always satisfied in real applications of Li-ion batteries.For this reason, and for the first time in the context of Li-ion batteries, we propose an estimation algorithm that requires a less stringent observability assumption.More precisely, we observe that the proposed state space model results in a linear time-varying system, which enables the use of Generalized Parameter Estimation-based Observers (GPEBO), an algorithm introduced in [46] that has never been applied, to the best of our knowledge, in LIBs.The major benefit of this observer is that the persistence of excitation condition is relaxed, which solves a major issue, as in many applications, this condition is not always met.The convergence and stability of all the aforementioned observers are based on an underlying observability assumption.In other words, if the system does not satisfy some minimal observability property, the mentioned observers cannot guarantee a coherent and stable estimation.In this sense, the GPEBO is able to guarantee an adequate estimation in scenarios where the mentioned observers cannot, which is the low-observability scenario of absence of persistent excitation.Consequently, there are some estimation problems that can be solved by the GPEBO and not the other observers.The main drawback of the GPEBO is that it can only be implemented in systems with a state-affine dynamics and linear output.Nonetheless, as will be shown later, the battery model falls within this model structure.
The remainder of this article is organised as follows.Section 2 formulates the ECM in state-space representation, describes the estimation objective and provides the theoretical frame in which the estimation can be achieved.In Section 3, the architecture of the observer used is described, defining its dynamics and formally establishing the conditions that allow the relaxation of the persistence of excitation.In Section 4, the GPEBO is compared against KF, estimating under different conditions the parameters of a battery (whose parameters have been obtained from [47]).The cases of persistent and non persistent excitation have been tested, as well as the presence of sensor noise.Finally, both observers have been compared with a vehicle driving cycle as a profile, which allows them to be tested in an EV scenario.At the end of the document, Section 5 provides a summary of the results and the conclusions of the work, as well as the future research directions on this topic.
Model Description and Problem Formulation
The development of Li-ion battery models encompasses a broad field of investigation, involving various approaches to modelling.The choice of a model structure depends on the degree of fidelity requirement relative to the actual physical behaviour of the cell.In this document, a phenomenological ECM has been used.
Equivalent Circuit Model
ECMs are typically composed of a voltage source, which corresponds to the Open Circuit Voltage (OCV); a series resistance R 0 ; and a variable number of RC nets (which determine the order of the model).Some other elements can be added to reflect other phenomena, such as hysteresis.While adding more RC nets may improve the accuracy, it also increases the computational burden and the complexity of system identification.In [47], three different ECM (first order, second order, and first order with hysteresis) are tested for several battery chemistries.While the model with hysteresis shows the best results, the difference between the first and second order was minimal for all chemistries.Additionally, the use of hysteresis is less common and adds stringent non-smooth nonlinearities to the model, which drastically increases the difficulties in the estimation design.
Hence, the model used in this article is the simple first-order model, which can be seen in Figure 1.A summary of the parameters is found in Table 1.Table 1.Parameters of the first-order ECM.
Parameter | Name | Units
C 1 | Polarisation capacitance [48] | F

When analysing the circuit in Figure 1, the equations that describe it can be written (taking the discharge current i_bat as positive) as

du_1/dt = -u_1/(R_1 C_1) + i_bat/C_1,   u_bat = u_OCV - u_1 - R_0 i_bat. (1)

To ease the observer design process, it is convenient to rewrite (1) using the state-space formalism. Let the states be defined as follows

x_1 := u_1,   x_2 := 1/C_1,   x_3 := u_OCV,   x_4 := R_0, (2)

or in a more amenable form

ẋ = A(t) x,   y = c(t) x,   τ := 1/(R_1 C_1), (3)

where A(t) has first row [-τ, i_bat, 0, 0] and zeros elsewhere, c(t) = [-1, 0, 1, -i_bat] and y = u_bat, so that ẋ_1 = -τ x_1 + i_bat x_2 and ẋ_2 = ẋ_3 = ẋ_4 = 0. Notice that states x_2 and x_4 are considered constant. This is a reasonable assumption, as these variables change because of the ageing of the battery, which happens over long time scales. Moreover, changes due to temperature variation are usually slow enough relative to the electrical time scale.
To summarise, the model presented in (3) presents only one known parameter, with a single input (the battery current) and a single output (the battery voltage).
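As a concrete illustration of this state-space formulation, the sketch below builds A(t) and c(t) for the first-order ECM as reconstructed in (1)-(3) and propagates the model with a simple forward-Euler step. The paper's simulations were carried out in MATLAB; this Python version is only a sketch, and the numerical values of R0, R1, C1 and the OCV are placeholders rather than the parameters identified in [47].

```python
import numpy as np

def A_matrix(i_bat, tau):
    """State matrix of the LTV model, with x = (u1, 1/C1, u_OCV, R0)."""
    A = np.zeros((4, 4))
    A[0, 0] = -tau      # -u1 / (R1*C1)
    A[0, 1] = i_bat     # i_bat * (1/C1)
    return A

def c_vector(i_bat):
    """Output map: u_bat = u_OCV - u1 - R0*i_bat (discharge current positive)."""
    return np.array([-1.0, 0.0, 1.0, -i_bat])

def simulate(i_profile, dt, x0, tau):
    """Forward-Euler simulation of x_dot = A(t)x, y = c(t)x."""
    x = np.array(x0, dtype=float)
    xs, ys = [], []
    for i_bat in i_profile:
        ys.append(c_vector(i_bat) @ x)
        xs.append(x.copy())
        x = x + dt * (A_matrix(i_bat, tau) @ x)
    return np.array(xs), np.array(ys)

# Placeholder parameters (NOT the values of [47]): R0 = 10 mOhm, R1 = 20 mOhm, C1 = 4000 F.
R0, R1, C1, ocv = 0.010, 0.020, 4000.0, 3.3
tau = 1.0 / (R1 * C1)
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
i_profile = 2.0 * np.sin(2 * np.pi * 10 * t) + 3.0 * np.sin(2 * np.pi * 5 * t)  # exciting input
x0 = [0.0, 1.0 / C1, ocv, R0]
xs, u_bat = simulate(i_profile, dt, x0, tau)
print(u_bat[:5])
```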
Estimation Objective
The model presented in Section 2.1 does not have any directly measurable state. Moreover, the model is time-varying (as it depends on i bat ), and the only measurable signals are the output (u bat ) and the input i bat . Thus, these two measurements need to provide enough information to perform the estimation of the four states. The estimation objective is therefore to generate an estimate x̂ of the states x such that the following holds:

lim sup_{t→∞} ||x(t) − x̂(t)|| ≤ ε, (4)

where ε > 0. In this estimation objective, we already account for the effect that the presence of sensor noise and unknown parameters may have on the estimation accuracy. We remark that, while it is desirable to estimate all four states, it is the OCV that the authors believe is the most important. It has been mentioned that there is a direct relation between the OCV and the SoC, so if the OCV is estimated, the SoC can be obtained in the fashion of

SoC = f⁻¹(u_OCV), (5)

where f denotes the (invertible) OCV-SoC characteristic, u_OCV = f(SoC). This relationship has been widely studied in the literature, with many models describing it. In [11,49] several OCV-SoC models are compared, while the review provided by [50] not only compares several models but provides some selection metrics and methods for the estimation of the parameters. Within the realm of OCV-SoC models, some of the most common ones include the Shepherd model, the Nernst model, a blend of both, as well as semi-empirical equations formulated using polynomial or exponential terms. In [49] these models can be found. Another common method is the use of look-up tables. We remark that the selection of the OCV-SoC model falls outside the scope of this document, but we encourage the reader to consult the referenced literature if needed. Here, we consider that the knowledge that the two variables can be related is enough to proceed with the estimation of OCV.
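As an illustration of how an estimated OCV can be mapped back to a SoC value once an OCV-SoC characteristic is available, the sketch below inverts a monotone look-up table by linear interpolation. The table entries are invented for the example; in practice they would come from one of the models or look-up tables cited above.

```python
import numpy as np

# Hypothetical OCV-SoC look-up table (monotone in SoC); real values would be
# identified experimentally for the specific cell chemistry.
soc_grid = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
ocv_grid = np.array([3.00, 3.15, 3.22, 3.28, 3.31, 3.33, 3.40])  # [V]

def soc_from_ocv(ocv_est):
    """Invert the OCV(SoC) characteristic by linear interpolation."""
    # np.interp requires an increasing abscissa; ocv_grid is monotone increasing here.
    return float(np.interp(ocv_est, ocv_grid, soc_grid))

print(soc_from_ocv(3.3275))   # maps an estimated OCV of 3.3275 V to a SoC value
```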
Limiting Observability Assumptions of Existing Observers
Prior to any observer design, it is crucial to study whether the proposed estimation problem is solvable or not in the first place.Indeed, we need to analyse if the measured signal contains enough information in order to reconstruct the states of the system.In the control theory community, this type of study is known as observability analysis.
A system of the form ( 3) is said to be observable if any trajectory of the measured signal y(t) is generated by a unique initial condition of the system x(0).Conversely, if there are multiple initial conditions of (3) that generate exactly the same output signal for all time, then the system is unobservable.A natural consequence of unobservability is that the state estimation problem cannot be solved.
We remark that the observability of the system (3) strongly depends on the current profile, i bat , which is introduced into the battery. For instance, if we fix the current at a constant value i bat = −1, we can see that initial conditions x 3 (0) = 2 and x 4 (0) = 1 will generate the same output as the initial conditions x 3 (0) = 1 and x 4 (0) = 2. Consequently, the proposed estimation problem can only be solved under particular current profiles. With this fact in mind, we motivate the necessity of explicitly studying the observability of the system (3). To do so, first, we consider the state transition matrix Φ(t 1 , 0) of the system (3) as the matrix that relates an initial condition x(0) of the system with the value of the states at time t 1 ≥ 0 [51]:

x(t 1 ) = Φ(t 1 , 0) x(0). (6)
Now, in the same time interval [0, t 1 ], we define the observability Grammian of the system (3) as [51]

W(0, t 1 ) := ∫_0^{t 1} Φ(s, 0)^⊤ c(s)^⊤ c(s) Φ(s, 0) ds. (7)

A well-known result from the literature ([51], Theorem 9.8) is that the initial condition x(0) will be uniquely determined by the measured signal y in the time interval [0, t 1 ] if the observability Grammian W(0, t 1 ) is invertible. That is, the system (3) is observable if the current profile sufficiently excites the dynamics of the battery and guarantees that the Grammian is invertible.
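The observability Grammian (7) can be evaluated numerically by integrating the state-transition matrix together with the Grammian itself; its smallest eigenvalue then indicates how strongly a given current profile excites the model. The sketch below does this for the A(t) and c(t) reconstructed in (3), contrasting a rich two-sinusoid profile with the constant-current case (as in the i_bat = −1 example above), which yields a nearly singular Grammian. All parameter values are placeholders.

```python
import numpy as np

def gramian(i_profile, dt, tau):
    """Numerically integrate W(0,t1) = int_0^t1 Phi' c' c Phi ds for the 4-state ECM."""
    Phi = np.eye(4)                      # state-transition matrix, Phi(0,0) = I
    W = np.zeros((4, 4))
    for i_bat in i_profile:
        A = np.zeros((4, 4)); A[0, 0] = -tau; A[0, 1] = i_bat
        c = np.array([[-1.0, 0.0, 1.0, -i_bat]])
        W += Phi.T @ c.T @ c @ Phi * dt  # accumulate the Grammian integrand
        Phi = Phi + dt * (A @ Phi)       # Phi_dot = A(t) Phi
    return W

dt, tau = 1e-3, 1.0 / (0.020 * 4000.0)   # placeholder R1*C1
t = np.arange(0.0, 5.0, dt)

i_rich = 2 * np.sin(2 * np.pi * 10 * t) + 3 * np.sin(2 * np.pi * 5 * t)  # exciting profile
i_flat = -1.0 * np.ones_like(t)                                          # constant current

for name, prof in [("rich", i_rich), ("constant", i_flat)]:
    W = gramian(prof, dt, tau)
    print(name, "smallest eigenvalue of W:", np.linalg.eigvalsh(W)[0])
```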
This observability analysis is a well-known result in the control theory community; moreover, it has already been performed in similar equivalent circuit models, e.g., [45]. What is not that well known is that the convergence of the Kalman Filter (and its variants) requires a stricter condition known as uniform complete observability [52]. More precisely, the system (3) is uniform completely observable if there are some positive constants T > 0 and δ > 0 such that, for all t ≥ 0, the following is satisfied:

W(t, t + T) ≥ δ I. (8)

Although the definitions of observability and uniform complete observability are relatively similar, they present some technical differences with significant practical relevance. Precisely, the observability of the battery model (3) only needs to be satisfied in a finite window of time [0, t 1 ], while uniform complete observability needs to be satisfied persistently in time and in a fixed window defined by T. To better understand this difference, consider the current profile represented in Figure 2.
In this profile, during the first 5 s, the current is the sum of two sinusoids, which is enough to guarantee observability and uniform complete observability in the time interval t ∈ [45,50]. Since this current profile makes the system observable during a specific time interval, the battery model is observable over the full current profile. Nonetheless, once the sinusoidal excitation ends, the current is kept constant and the model stops being uniform completely observable. Therefore, even though the current profile in Figure 2 guarantees observability of the battery model (that is, the measured signals contain enough information to estimate the unknown parameters), it does not guarantee uniform complete observability. Consequently, any Kalman filter implemented on a battery with this current profile is not guaranteed to converge.
We highlight that this difference between observability and uniform complete observability is of significant importance for Li-ion batteries and, to the best of our knowledge, has been missed in the estimation literature.Indeed, the importance of this difference is twofold.First, most estimation results in equivalent circuit models focus on Kalman Filters (and variations of the Kalman Filter) [29] which require uniform complete observability.Second, most Li-ion battery applications implement current with various excitation levels, for example, in vehicular applications, when the vehicle is stopped there is no excitation in the battery.Consequently, uniform complete observability is rarely satisfied in practice.
To solve this issue, in Section 3, and for the first time in the context of Li-ion batteries, we propose an observer that does not require uniform complete observability and has guaranteed convergence with the milder observability condition.
What If a Higher-Order Model Was Considered?
One may wonder what would happen if a higher-order model was considered.As a higher-order model would contain at least one more RC net, let us take a second-order model as an example, which adds another RC net to the circuit shown in Figure 1.A quick analysis shows that a second-order model with full unknown parameters is not observable.Precisely, it is not uniform completely observable and does not satisfy the interval excitation condition.We recall here that the system will be non-observable if there are multiple parameter values that generate an identical measured signal.Indeed, notice that if the parameter values of net 1 and net 2 were interchanged, the value of u bat would be identical; therefore, it would be possible to achieve two solutions with the same exact output, without a way to determine which values are correct.Therefore, high-order ECM do not satisfy any minimal observability condition if the full parameters are completely unknown.In this sense, some information of the unknown parameters should be included for our technique to be implementable.Let us be reminded that the only measurable variables are u bat and i bat .
In relation to using different ECM, for instance, the work in [45] studies the observability of a different type of circuit and shows that UCO is only satisfied for particular current inputs.In this sense, our approach could relax the UCO property to one with weaker observability.
Proposal
This section is dedicated to presenting the main result of the paper.That is, we present an observer for the battery system which only requires a mild observability assumption.The observer is based on applying the ideas presented in [46] to the presented battery model in (3).
The main idea of the observer is, first, to transform the state-estimation problem into a parameter-estimation problem [53]. Second, the parameters are estimated through a parameter-estimator algorithm based on the recently proposed dynamic regressor extension and mixing (DREM) approach [54]; see [55] for a recent review on the topic. The combination of these two steps results in an estimator that relaxes the observability assumptions.
The next subsection is dedicated to explaining how the state estimation problem can be transformed into a parameter estimation one.
Transforming the Problem
Consider the battery model (3) and recall the definition of the state transition matrix in (6). The estimation objective described in Section 2.2 should also be kept in mind. Consider a copy of the battery model of the form

ξ̇ = A(t) ξ. (9)

Notice that the solution of (9) can also be depicted through the state transition matrix (6); that is, ξ(t) = Φ(t, 0)ξ(0). Since we do not know the initial condition of the battery, in general, we will have ξ(0) ≠ x(0), which means that the copy (9) is initialised at a different initial condition from the ground-truth model (3). Now, we define the error between the battery state and the copy model, e := x − ξ. Then, since both the battery model and the copy of the model are linear, the dynamics of the error can be computed through the same state-transition matrix. More specifically,

e(t) = Φ(t, 0) e(0) = Φ(t, 0)(x(0) − ξ(0)). (10)

From this result, we can see that the states of the battery model can be computed as

x(t) = ξ(t) + Φ(t, 0) θ, (11)

where θ := x(0) − ξ(0). Notice that the signal ξ(t) comes from the copy of the model (9), which can be run in parallel to the battery system and is therefore known. The state transition matrix Φ(t, 0) can be computed by running in parallel the following equation:

Φ̇(t, 0) = A(t) Φ(t, 0),   Φ(0, 0) = I. (12)

Therefore, the only thing that remains to be computed is the unknown parameter (that is, constant) vector θ related to the initial error between ξ and x. In other words, if we are able to estimate θ, we can reconstruct the states through (11). From this, we can see how the state estimation problem can be transformed into a parameter estimation one.
The next natural question is how to estimate the parameter θ. Indeed, from (11), we can deduce the following equality

y(t) = c(t) ξ(t) + c(t) Φ(t, 0) θ, (13)

which can be rearranged as the following linear regression equation

Y(t) = Ψ(t)^⊤ θ, (14)

where Y := y − c(t)ξ and Ψ := Φ(t, 0)^⊤ c(t)^⊤. Notice that both Y and Ψ are measurable signals; thus, what remains is to exploit these measurable signals and the linear regression Equation (14) in order to estimate the unknown parameter θ. This will be the focus of the next section.
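Constructing the regression (14) only requires running the copy model (9) and the Φ-dynamics (12) in parallel with the battery and combining them with the measured voltage. A minimal sketch, using the same reconstructed A(t) and c(t) as before and an arbitrary initialisation of the copy model, is given below.

```python
import numpy as np

def regression_signals(i_profile, y_meas, dt, tau, xi0=None):
    """Generate Y(t) = y - c(t) xi and Psi(t) = Phi(t,0)^T c(t)^T from measured current/voltage."""
    xi = np.zeros(4) if xi0 is None else np.array(xi0, dtype=float)  # copy model, arbitrary initialisation
    Phi = np.eye(4)
    Ys, Psis = [], []
    for i_bat, y in zip(i_profile, y_meas):
        A = np.zeros((4, 4)); A[0, 0] = -tau; A[0, 1] = i_bat
        c = np.array([-1.0, 0.0, 1.0, -i_bat])
        Ys.append(y - c @ xi)             # Y  := y - c(t) xi
        Psis.append(Phi.T @ c)            # Psi := Phi(t,0)^T c(t)^T
        xi = xi + dt * (A @ xi)           # copy model, eq. (9)
        Phi = Phi + dt * (A @ Phi)        # transition-matrix dynamics, eq. (12)
    return np.array(Ys), np.array(Psis)

# The outputs satisfy Y(t) ~ Psi(t)^T theta with theta = x(0) - xi(0), as in (14).
```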
Estimator Equations
There are plenty of existing algorithms that can be utilised to solve the parameter estimation problem in (14). Some notable examples are gradient descent [44]; the least-squares algorithm [56], with its variations; or high-order algorithms [57,58], as well as the adaptive parameter estimator of [43]. Nonetheless, the convergence of all these algorithms requires what is usually referred to as a persistence of excitation condition. Indeed, we say that the linear regression in (14) satisfies the persistence of excitation condition if there exist some positive constants T > 0 and δ > 0 such that, for all t ≥ 0,

∫_t^{t+T} Ψ(s) Ψ(s)^⊤ ds ≥ δ I. (15)

Roughly speaking, the persistence of excitation condition is related to the fact that the signal Y should contain enough information to infer the parameters θ. Nonetheless, by recalling the definition Ψ(t) = Φ(t, 0)^⊤ c(t)^⊤, we can see that the persistence of excitation condition (15) for the linear regression (14) is equivalent to the uniform complete observability condition (8) of the original system. Consequently, if we just implemented classical parameter estimation algorithms in (14), we would be unable to solve the observability conflict described in the previous section. For this reason, this section proposes using a parameter estimator based on the DREM idea [55]. Specifically, we propose using the algorithm presented in [54], which has been proven to converge under milder observability conditions and has already been used to relax the persistence of excitation condition in other electrochemical devices [59].
The general idea of the algorithm in [54] is to pass the measured signals Y and Ψ through a set of pre-processing dynamics and then compute a nonlinear adjugate operation over the post-processed signals. Then, a standard gradient descent is implemented over the resulting signals. More precisely, the structure of the estimator is as follows:

θ̇_g = γ_g Ψ (Y − Ψ^⊤ θ_g),   Ω̇ = −γ_g Ψ Ψ^⊤ Ω, Ω(0) = I,   θ̂̇ = Γ ∆ (𝒴 − ∆ θ̂), (16)

∆ := det(I − Ω),   𝒴 := adj(I − Ω) (θ_g − Ω θ_g(0)), (17)

where γ_g > 0 and Γ = diag{γ_1, . . ., γ_4} > 0 are positive constants to be tuned and where det(•) and adj(•) refer to the determinant and adjugate of a matrix. The state estimate is then recovered through (11) as x̂(t) = ξ(t) + Φ(t, 0) θ̂(t). Intuitively, the main idea of the proposed observer is to introduce the measured signals Y and Ψ to the pre-processing dynamics θ̇_g and Ω in (16) and then compute the nonlinear operations in (17) in order to generate the new signals ∆ and 𝒴. Then, even if the original measured signals Y and Ψ did not present a persistence of excitation condition, the new signals ∆ and 𝒴 may indeed present such a condition. This allows the parameters to be recovered through the θ̂-dynamics in (16).
More precisely, in [54], it was proven that such an estimator has guaranteed convergence if the linear regression in (14) satisfies an interval excitation condition. That is, for a positive constant t_c, the following matrix is invertible:

∫_0^{t_c} Ψ(s) Ψ(s)^⊤ ds. (18)

Notice that interval excitation of the linear regression (14) is equivalent to the observability of the original system (3) as presented in the previous section. Therefore, by implementing (16) in the considered system, for the first time, we can relax the uniform complete observability requirement to a milder observability condition and still guarantee estimation convergence.
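A discrete-time sketch of the estimator (16)-(17) as reconstructed above is given below: the measured pair (Y, Ψ) feeds the pre-processing dynamics, the determinant/adjugate operation produces the mixed regression 𝒴 = ∆θ, and a gradient step recovers θ̂. This is an illustrative Euler implementation with arbitrary gains, not the authors' MATLAB code.

```python
import numpy as np

def gpebo_drem(Ys, Psis, dt, gamma_g=0.1, Gamma=None):
    """DREM-based parameter estimator for the regression Y = Psi^T theta (4 parameters)."""
    n = 4
    Gamma = np.eye(n) if Gamma is None else Gamma
    theta_g = np.zeros(n)              # pre-processing (gradient) estimate
    theta_g0 = theta_g.copy()
    Omega = np.eye(n)
    theta_hat = np.zeros(n)
    history = []
    for Y, Psi in zip(Ys, Psis):
        # nonlinear mixing step, eq. (17) as reconstructed
        M = np.eye(n) - Omega
        Delta = np.linalg.det(M)
        if abs(Delta) > 1e-12:
            adjM = Delta * np.linalg.inv(M)   # adj(M) = det(M) * inv(M) for invertible M
        else:
            adjM = np.zeros((n, n))           # before any excitation, Delta ~ 0 and the update stays inactive
        Yc = adjM @ (theta_g - Omega @ theta_g0)     # element-wise: Yc_i = Delta * theta_i
        # gradient descent on the mixed scalar regressions
        theta_hat = theta_hat + dt * (Gamma @ (Delta * (Yc - Delta * theta_hat)))
        # pre-processing dynamics, eq. (16) as reconstructed
        theta_g = theta_g + dt * gamma_g * Psi * (Y - Psi @ theta_g)
        Omega = Omega + dt * (-gamma_g * np.outer(Psi, Psi) @ Omega)
        history.append(theta_hat.copy())
    return np.array(history)

# The state estimate then follows from (11): x_hat(t) = xi(t) + Phi(t,0) @ theta_hat(t).
```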
Numerical Simulations
Methodology
In order to analyse the effectiveness of the formulated observer, a series of numerical tests were developed.To perform this analysis, a digital twin of a real lithium-ion battery system was considered, which allowed us to mimic the expected measurable battery voltage, which was used, together with battery current, as input for the observers.Later, the estimated battery voltage was compared with the measured one, as well as the estimated states are compared to those of the digital twin.The battery current varied depending on the test performed.This can be seen in Figure 3.The digital twin is based on the lithium-iron phosphate (LFP) battery provided by [47], and its specifications are presented in Table 2.The main reason to use this work is that it provides a first-order model that has been calibrated and validated, making it possible to have a realistic idea of the values that the different parameters R 0 , R 1 and C 1 can take in real applications.These parameters were estimated for different SoC levels, but for our work case, they are considered constant and independent of the SoC.Therefore, the values that have been considered correspond to the average of all SoC level values, obtaining the following parameters for the first-order model: With this model in mind, we develop different studies in order to see if the observer is capable of correctly estimating the parameters defined in (19), which from now on will be referred to as the real parameters.In order to see the advantages of the proposed observer, it will be compared with the common and well-known Kalman filter (KF) technique.Furthermore, for a more exhaustive study, different scenarios will be considered.
The first case that has been analysed is one in which the current profile guarantees persistent excitation of the battery model.In this scenario, both observers should present good performance results.In the second case, a case of non-persistent excitation is studied in which better behaviour of the GPEBO should be observed with respect to the KF one.Finally, three more scenarios are considered which attempt to contemplate different phenomena or operating situations, such as the measurement noise phenomenon, the effect of not considering the OCV constant and the use of load profiles that may be demanded in real applications, such as Worldwide Harmonised Light Vehicles Test Procedure (WLTP).
All these experiments have been performed assuming that the value of the product between the resistor R 1 and the capacitor C 1 is known; that is, the parameter τ in (3) is known, in order to test an ideal case in which both observers should achieve satisfactory estimation.The product between R 1 and C 1 , for this particular case of study, is 95.5431 s.Hence, τ in Equation (3) acquires the value of 0.010469.The reason behind the selection of KF as a benchmark is that KF-based algorithms are very popular in the literature.We have used the basic KF because the model (1) is linear.Even though batteries are non-linear systems, the description used is that of a time-varying linear system.EKF is suitable for linearising non-linear systems and treating them in a linear way, but if we applied it, the result would be the same as KF, as the linearised-model would be the already-linear model we have considered.A different situation happens regarding SMO or other non-linear observers.Based on our knowledge, utilising non-linear observers in linear systems often results in a poorer performance than utilising a linear observer.For the particular case of SMO, the high order of the model would result in too-high sensibility to noise, which would greatly affect the estimation.
The KF was implemented in its classical form according to [60], where P is the state covariance matrix, R is the measurement covariance matrix and Q is the process noise covariance matrix. Finally, for each of the studied cases, two different tunings have been tested for each observer. We establish γ Q as the KF gain and γ g as the GPEBO gain. First of all, let it be noted that γ Q is not a gain itself, but an adjustment of the covariance matrix of the process noise of the KF. A larger covariance matrix will, in most cases, produce a more robust filter but at the expense of the convergence time. γ Q is obtained following Kalman's procedure and only depends on the characteristics of the noise. On the other hand, γ g is purely a gain parameter. The tuning of this parameter is a trade-off between sensitivity to noise and convergence time. As can be seen, both observers have very different tuning processes, which makes comparison difficult. Therefore, it is not possible to establish a solid criterion regarding which gain is high or low, as the magnitude is relative.
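For reference, a discrete-time Kalman filter over the same linear time-varying model can be sketched as follows. The state matrix is Euler-discretised, and the covariances Q = γ_Q·I and R act as the tuning knobs described above; the specific values and the discretisation are illustrative assumptions, not the implementation used in the paper.

```python
import numpy as np

def kalman_filter(i_profile, y_meas, dt, tau, gamma_Q=0.005, r_var=1e-3):
    """Discrete-time KF for the 4-state ECM, with x = (u1, 1/C1, u_OCV, R0)."""
    n = 4
    x_hat = np.zeros(n)
    P = np.eye(n)
    Q = gamma_Q * np.eye(n)            # process-noise covariance (tuning knob gamma_Q)
    R = np.array([[r_var]])            # measurement-noise covariance
    est = []
    for i_bat, y in zip(i_profile, y_meas):
        A = np.zeros((n, n)); A[0, 0] = -tau; A[0, 1] = i_bat
        F = np.eye(n) + dt * A          # Euler discretisation of the state matrix
        C = np.array([[-1.0, 0.0, 1.0, -i_bat]])
        # predict
        x_hat = F @ x_hat
        P = F @ P @ F.T + Q
        # update
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x_hat = x_hat + (K @ (y - C @ x_hat)).ravel()
        P = (np.eye(n) - K @ C) @ P
        est.append(x_hat.copy())
    return np.array(est)                # index 2 of each row is the estimated OCV
```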
Case 1: Persistence of Excitation Current Signal
The first experiment consists of the estimation of the OCV (denoted by x 3 in the statement problem written in ( 2)) under an input current i bat that guarantees persistence of excitation.To ensure this condition of persistent excitation, different sinusoidal signals have been used, resulting in the current profile that can be seen in Figure 4a.Considering this current, a constant OCV of 3.3275 V and the system parameters defined in (19), the resulting terminal voltage u bat can be computed by means of the firstorder model defined in (1).The profile of this terminal voltage is shown in Figure 4b, in which it is possible to see the effect of the ohmic resistor R 0 and the RC net.
Using this current profile, a simulation in MATLAB was launched simultaneously with both KF and GPEBO observers to estimate the OCV state starting from null initial conditions.The KF observer was computed in its classical form according to [60], tuning the covariance matrix of the process noise Q.This covariance matrix was defined by means of the identity matrix multiplied by an observer gain defined as γ Q .With respect to the GPEBO observer with the structure presented in (16), it was programmed in MATLAB to tune the observer gain γ g .
The results obtained can be observed in Figure 5, in which it is possible to see how the estimated OCV converges to the real value in both cases.Looking at these profiles, it is possible to state that the GPEBO converges uniformly to the real value, while the KF estimation does not present this behaviour.These results fit with the theory explained in Section 3. On one hand, the KF ensures the convergence of the full parameter vector, creating a dependence between the individual elements.Therefore, it is possible to find parameters that converge with non-uniform behaviours according to the observer gains.This behaviour can be observed in the details of the KF profiles (in yellow) in Figure 5.As can be noticed, firstly, the estimated OCV increases from 0 V to 1.5 V in less than 10 s to later change the profile until the estimation converges to the real value at 700 s.Moreover, it can be seen that the time required to reach this initial point of 1.5 V is directly related to the observer gains.Using a γ Q of 0.005, the observer requires 10 s to reach the 1.5 V as can be seen in detail in Figure 5a, while this time is reduced until 1 s using a γ Q of 5, as seen in detail in Figure 5b.At this point, it is important to notice that the value of γ Q does not guarantee that all parameters will converge to the real values in a shorter or longer period of time.Looking at both Figure 5a,b it can be noted that the OCV converges to the real value at 700 s independently from the observer gain γ Q .
On the other hand, using the GPEBO proposed in this work, the behaviour of the observed dynamics is totally different.For this particular case, each parameter converges independently from the others following a uniform profile.This property can be observed in the details of Figure 5a,b that show the profiles of the OCV estimated by means of the GPEBO in red.Furthermore, for this observer the gain has a direct impact on the convergence time, making it possible to decrease the convergence time increasing the observer gain γ g .Figure 5a shows the OCV profile using a value of γ g = 0.1, where it is possible to see how the OCV converges to the real value in 60 s.This convergence time can be reduced using a greater value of γ g , as can be noticed in Figure 5b where the convergence time is 40 s with γ g = 100.
At this point, it can be concluded that, under persistent excitation, classical techniques such as the KF work properly but do not allow direct tuning of the convergence time. In contrast, the GPEBO presented in this work allows this tuning while guaranteeing a correct estimation of the parameters.
Case 2: Non-Persistence of Excitation Current Signal
The second experiment that was carried out used a current profile which presents an interval that is not UCO, followed by the example presented in Figure 2 of the previous section.
The current profile selected consists of the same one used in the previous case, but introduces an interval of constant current of 3 A.This current profile can be seen in the following Figure 6a, while the terminal voltage of the battery considering this current appears in Figure 6b.
In order to see the performance of the GPEBO observer under the operating condition of non persistent excitation, we used the observer gain γ g = 0.1, which was small, to ensure that the required estimation time was greater than 60 s.For the case of the KF observer, we used a gain γ Q = 0.005.
As can be seen in Figure 7, at the moment the exciting profile is interrupted, the KF stops converging and the estimated OCV remains constant, with a significant error with respect to the real value. The GPEBO, in contrast, converges to the real OCV without any apparent effect. As shown, the estimated OCV approaches the real value from t = 70 s onward, even though the constant-current interval began 20 s earlier. Moreover, it should be highlighted that the estimated OCV follows its characteristic uniform profile.
Figure 7. Profiles of the real and estimated OCV using the KF and GPEBO using the non-persistent excitation profile presented in Figure 6.
This experiment also makes it possible to validate the correctness of the proposed observer with respect to the other states. According to the model described in (3), aside from the OCV state, there are three more states that can be estimated. The first is the voltage across the resistor-capacitor branch, denoted u 1 . The second is the inverse of the capacitance C 1 , which can be defined as the elastance and expressed in units of 1/F. The final state is the resistance R 0 , which is connected in series with the resistor-capacitor branch.
Figure 8 presents the evolution of these different states mentioned.In this figure, it is possible to see how, by means of the GPEBO, it is possible to estimate their correct values.Using this study, it is possible to highlight the advantage of the proposed observer with respect to classical ones, when battery operating conditions with non-UCO profiles are used.In future studies, in order to simplify the analysis, only the estimation result of the OCV state, which is directly related to the SoC of the battery, will be presented.
Case 3: Sensor Noise
The next study consists of introducing measurement noise into the output signal. Thus, a Gaussian-distributed random signal with zero mean and 0.001 variance has been added to the u bat profile.
For this scenario, the same current profile from the previous study was used, which corresponds to a sinusoidal current with a non-UCO interval, as can be seen in Figure 6a. Using this current and the measurement noise mentioned, the resulting u bat signal is the one shown in Figure 9. In order to see the effect of the measurement noise and how it can be reduced by means of the observer gains, different values of the gains γ g and γ Q were used. The values chosen are the same as in the first study: two values of γ g were considered for the GPEBO, 0.1 and 100, while the values of γ Q for the KF observer are 0.005 and 5. Figure 10 presents the estimated OCV for both the GPEBO and KF observers under the measurement-noise conditions described.
As can be noticed in Figure 10a, using a low gain γ g = 0.1 for the GPEBO, the OCV converges to the real value following a uniform profile without noise. With respect to the KF, using γ Q = 0.005, it can be observed that the estimate still does not converge to the real value once the non-UCO interval appears. Moreover, in the same time window used in the details of both the KF and GPEBO profiles, it can be noted that the OCV estimation obtained with the KF presents significant noise.
If a higher value of the γ g parameter is used for the GPEBO, the estimation presents more noise along its profile and it may not converge to the exact real value. This behaviour can be seen in Figure 10b using γ g = 100. As can be observed, although the estimated OCV tries to converge more rapidly to the exact value, it finally presents a constant error which, despite not being very large, must be taken into account. With respect to the KF results, introducing a gain γ Q = 5, it is possible to highlight the increase in the noise of the OCV estimation compared to the previous low gain. Taking into account these results, it is clear that, as in the vast majority of observers, there is a trade-off between the convergence time and the sensitivity to measurement noise. For the case of the GPEBO, the analysis shows that a small γ g value must be selected in order to guarantee that the estimation converges to the real value with null error. Concerning the KF observer, it is clear that, due to the presence of a non-UCO interval, the estimation does not converge to the real value, but the influence of the measurement noise can be counteracted by decreasing the value of γ Q .
Case 4: Variable OCV
An important study that must be developed is one that analyses the behaviour of the observer when the OCV (directly related to the SoC) varies.According to the model presented in (3), the OCV is considered as a constant parameter.In this context, an adequate observer should be robust to this discrepancy between the reality and the model.
In order to perform this analysis, a different case from the previous experiments was considered. Indeed, instead of assuming a constant OCV, a sinusoidal OCV was used, chosen so that the battery remained within its operational region, in order to approximate a realistic scenario.
Similar to the previous experiment, the same observer gains were used for both GPEBO and KF observers in order to analyse their effect on the estimation performance.Figure 11 shows the results obtained for this experiment when the persistent excitation current shown in Figure 4a is used.
As can be noticed, the use of the GPEBO makes it possible to estimate the real OCV with practically null error when steady-state behaviour occurs.Using an observer gain γ g = 0.1, the detail of Figure 11a shows the existence of a very small error between the real value and the estimated one.This error disappears when a higher value of γ g is used, as shown in further detail in Figure 11b.
It is important to remark that the use of the KF does not guarantee the convergence of the observer to the real value of OCV.Although a persistent excitation profile for the current signal is used, the observer is not able to estimate the OCV with null error when it changes over time.For this particular case, the only effect of the observer gain γ Q is to push the estimation more quickly or more slowly at the beginning of the experiment, as seen in the first experiment carried out in Section 4.2.
Case 5: Vehicle Load Profile
As has been mentioned, one of the points of this work is to guarantee the state and parameter estimation of lithium-ion batteries under real operating conditions.For that reason, in the last-case scenario, the load profile proposed is a standard driving cycle WLTP, which is used to test range, fuel consumption, and emissions in light vehicles in the European Union.It applies mainly to passenger cars, vans, and some buses.For our case study, the load profile used has been extracted from the velocity profile of the WLTP driving cycle [61,62], which can be observed in Figure 12a.From this velocity, it is possible to obtain the current profile according to the model and parameters that define the dynamics of a vehicle ([63], Chapter 7); for instance, the Toyota Proace 50 kWh model [64].Finally, the current has been scaled using a factor 1/5000 to meet the requirements of the lithium-ion battery defined in Table 2.The profile of this scaled current can be seen in Figure 12b.
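The conversion from a velocity trace to a battery current follows the standard longitudinal vehicle-dynamics power balance; the sketch below illustrates the idea. The vehicle parameters (mass, drag and rolling-resistance coefficients, pack voltage, drivetrain efficiency) are rough placeholders rather than the Toyota Proace values, and the 1/5000 factor mirrors the scaling described above.

```python
import numpy as np

def wltp_velocity_to_current(v, dt, mass=2000.0, cd=0.3, area=2.5, cr=0.01,
                             rho=1.225, g=9.81, v_pack=350.0, eta=0.9, scale=1/5000):
    """Convert a velocity trace [m/s] into a (scaled) battery current [A].

    Tractive power = (m*a + m*g*Cr + 0.5*rho*Cd*A*v^2) * v, and current = P / (eta * V_pack).
    All vehicle parameters here are illustrative placeholders.
    """
    a = np.gradient(v, dt)                                    # longitudinal acceleration
    force = mass * a + mass * g * cr + 0.5 * rho * cd * area * v**2
    power = force * v                                         # tractive power [W]
    i_pack = power / (eta * v_pack)                           # pack current [A]
    return scale * i_pack                                     # scaled to the small cell of Table 2

# Example with a synthetic velocity ramp (a real WLTP trace would be loaded from file):
dt = 1.0
v = np.concatenate([np.linspace(0, 15, 60), 15 * np.ones(60), np.linspace(15, 0, 60)])
i_bat = wltp_velocity_to_current(v, dt)
print(i_bat.min(), i_bat.max())
```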
Using this current profile, a simulation has been launched that considers the OCV as constant in order to study if the observer can estimate its value.As can be noticed in Figure 12b, this current profile has some regions with constant null values.Thus, it is interesting to analyse how the observer behaves under this situation.Following with the same procedure developed in previous experiments, the same sets of observer gains were tested again, showing different performances depending on the gain values.Figure 13 shows the OCV estimations considering the low and high observer gains.As can be noticed, the use of the GPEBO correctly estimates the OCV with null error, while for the case of the KF, it does not reach the ground-truth value.
It is possible to see how the KF observer does not converge to the real value of OCV according to the details shown in Figure 13a,b, which correspond to the low and high observer gains, respectively.With respect to the GPEBO, it can be seen how, by means of a high observer gain γ g = 100, the estimated OCV converges to the real one with null steady state error.However, with a low gain γ g = 0.1, it is possible to see how there is a small error due to the fact that the estimation has not reached the steady state.Thus, it can be seen how a properly tuned GPEBO successfully estimates the real value, even though the convergence time is not short, when a realistic profile of the current is considered for the battery.One may wonder why, in this case, KF seems to converge faster than GPEBO, even if the latter converges to the true value and KF does not.The reason behind this is that GPEBO includes two dynamical systems that are connected in the cascade.The first is θg -dynamics, which pre-process the measured signals.Second is the θ-dynamics, which use the post-process signals to estimate the parameters.In this sense, intuitively, the GPEBO requires the θg to reach a large enough value for the θ-dynamics to start quickly converging to the true value.This happens around 500 s in Figure 13a.Alternatively, the KF includes a set of dynamics (the x-dynamics and the P dynamics) which run in parallel.Since the GPEBO has dynamics in cascade and the KF in parallel, the KF presents faster convergence during the initial times.A simple way of solving this is to initialise θg at a larger value in order to reduce the convergence time of the θg -dynamics.Nonetheless, in this work, we considered the worst-case scenario, in which we have no information regarding how to initialise this variable, so we initialise it at an arbitrary value of zero.
Conclusions and Future Work
In this document, a new SoC estimator has been developed based on the application of GPEBO, in a novel approach that merges the benefits of using a well-known, popular and linear model such as first-order ECM with an observer that guarantees accurate estimation even in the absence of persistent excitation.
It has been shown that, in the case with persistence of excitation, the GPEBO works better than the KF, and that in the non-persistent excitation situation the KF is unable to converge, while the GPEBO is not affected by the change in excitation. This performance is maintained when measurement noise is introduced, although in this situation the gain needs finer tuning. A similar situation occurs when the WLTP, a realistic load profile, is applied to the battery: again, the KF is unable to converge, whereas the GPEBO converges but is highly dependent on the gains. Finally, for the case of a varying SoC, implemented through a sinusoidal variation of the OCV, an analogous response is found: the KF cannot track the OCV, while the GPEBO can, with a performance that depends on the gain. In summary, it can be stated that, in general, for an application such as the one considered, the GPEBO works better than the KF.
However, as shown by the simulation results, the tuning of the GPEBO is not trivial and requires careful treatment.On the other hand, once properly tuned, it performed successfully in all the experiments, showing its advantages against KF, which was used as a benchmark.
Considering these points, some future lines of research appear to be interesting: • Test of the performance against full unknown parameters.In the test performed, the time constant of the circuit, τ, was assumed to be known.We think that practical applications would benefit if the GPEBO did not need to know this parameter.
•
Experimental validation.Independent test to characterise the cell (as in [48]) to compare it against the estimated value of the parameters.
•
Experiment with temperature sensitivity.With temperature affecting the value of many ECM parameters, it would be interesting to test the performance of GPEBO with different temperatures.• Automatic tuning.Automatic tuning would solve the main drawback that we have experienced, which is finding a gain that ensures performance.
Figure 1 .
Figure 1.First-order ECM.u OCV is the OCV, u bat is the voltage at the battery terminals and u 1 is the voltage at the RC net.
Figure 2 .
Figure 2.Example of a current profile that makes the system observable but not Uniform Completely Observable (UCO).In the first 5 s, the profile excites the battery system in a way that makes the system observable and complies with UCO definition during this time-frame, but in the last 5 s, as the current is constant, the UCO condition is not met in this second time-frame.Consequently, because of the first part, this current profile ensures the observability of the battery during the whole 10 s while not guaranteeing UCO.
Figure 4 .
Figure 4. Current and terminal voltage profiles used to guarantee the persistence of excitation condition.the current is a custom profile defined by two sinusoidal signals of amplitude 2 and 3 A, and 10 and 5 Hz of frequency, respectively.The terminal voltage response is calculated according to this current profile and the model described in (1).
Figure 5 .
Figure 5. Profiles of the real and estimated OCV using the KF and GPEBO for different gain observers γ for the persistent excitation profile shown in Figure 4. (a) Using low observer gains of γ Q = 0.005 and γ g = 0.1.(b) Using high observer gains of γ Q = 5 and γ g = 100.
Figure 6 .Figure 7 .
Figure 6.Current and terminal voltage profiles used to analyse the problem of a non-persistence excitation condition.Current is a custom profile that defined the first 50 s using two sinusoidal signals of amplitude 2 and 3 A, and 10 and 5 Hz of frequency, respectively, and the remaining time by a constant value of 3 A. The terminal voltage response is calculated according to this current profile and the model described in(1).
Figure 8 .
Figure 8. Profiles of the real and estimated states using the KF and GPEBO with the non-persistent excitation profile presented in Figure 6. (a) Voltage of the resistor-capacitor branch u 1 . (b) Elastance (inverse of capacitance) of the capacitor C 1 . (c) Resistance of the series resistor R 0 .
Figure 9 .
Figure 9. Terminal voltage u bat considering the current profile shown in Figure 6a and a Gaussian random measurement noise with 0 mean and 0.01 variance.
Figure 10 .
Figure 10.Profiles of the real and estimated OCV using the KF and GPEBO for different gain observers γ considering the noise output measurement shown in Figure 9. (a) Using low observer gains of γ Q = 0.005 and γ g = 0.1.(b) Using high observer gains of γ Q = 5 and γ g = 100.
Figure 11 .
Figure 11.Profiles of the real and estimated OCV using the KF and GPEBO for different gain observers γ when the OCV is not constant and is defined by means of a sinusoidal signal of 0.2 V amplitude, 3.3 V mean value and 0.08 Hz of frequency.(a) Using low observer gains of γ Q = 0.005 and γ g = 0.1.(b) Using high observer gains of γ Q = 5 and γ g = 100.
Figure 12 .
Figure 12.Velocity and current profiles of the WLTP driving cycle[61].This profile has been scaled and transformed to current using the equation of the vehicle dynamics[63].
Figure 13 .
Figure 13.Profiles of the real and estimated OCV using the KF and GPEBO for different gain observers γ using the current profile shown in Figure 12b.(a) Using low observer gains of γ Q = 0.005 and γ g = 0.1.(b) Using high observer gains of γ Q = 5 and γ g = 100. | 2023-12-04T17:36:37.155Z | 2023-11-28T00:00:00.000 | {
"year": 2023,
"sha1": "5ca6aed3079ab263c86fe6114f1585a2ada4bda1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2313-0105/9/12/578/pdf?version=1701186686",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1586ae7878139af9cbdd04e0f912e5ebd4b28134",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": []
} |
259104757 | pes2o/s2orc | v3-fos-license | Staged Strategies to Deal with Complex, Giant, Multi-Fossa Skull Base Tumors
Given the complex and multifaceted nature of resecting giant tumors in the anterior, middle, and, to a lesser extent, the posterior fossa, we present two example strategies for navigating the intricacies of such tumors. The foundational premise of these two approaches is based on a two-stage method that aims to improve the visualization and excision of the tumor. In the first case, we utilized a combined endoscopic endonasal approach and a staged modified pterional, pretemporal, with extradural clinoidectomy, and transcavernous approach to successfully remove a giant pituitary adenoma. In the second case, we performed a modified right-sided pterional approach with pretemporal access and extradural clinoidectomy. This was followed by a transcortical, transventricular approach to excise a giant anterior clinoid meningioma. These cases demonstrate the importance of performing staged operations to address the challenges posed by these giant tumors.
Introduction
Surgical excision of giant tumors in the anterior, middle, and partially in posterior fossa presents unique challenges due to the extension of the lesions in the sagittal, coronal, and axial planes. The extension of the lesions varies, and the surgical approach is tailored accordingly based on the size of the tumor. Several strategies have evolved for excising giant tumors in this environment. These multiple strategies include pterional craniotomy, modified pterional craniotomy, the cranio-orbitozygomatic approach, middle fossa craniotomy through anterior transpetrosal and posterior transpetrosal approaches, as well as endoscopic endonasal and expanded approaches [1][2][3][4]. Furthermore, depending on the characteristics of the tumor, a combination of these approaches may be used to ensure a thorough resection of the tumor. Giant pituitary adenomas and anterior clinoid meningiomas are two types of tumors that often require complex surgical strategies for complete resection due to their unique anatomical characteristics and potential complications [5][6][7][8].
Pituitary adenomas are intracranial tumors that account for 5-14% of surgically resected lesions [5]. Common symptoms of these tumors include bitemporal hemianopsia, headaches, and endocrine dysfunction [6]. Pituitary adenomas are graded according to the extent of invasion of local anatomical structures. Grade I represents pituitary adenomas that are limited to the sellar region. Grade II represents invasion into the cavernous sinus. Grade III is characterized by the elevation of the dura of the superior wall of the cavernous sinus. Supradiaphragmatic-subarachnoid extensions are characteristic of Grade IV pituitary adenomas [9]. Pituitary adenomas can exhibit invasive extensions that follows anatomic pathways through or around dura of the sellar region, creating diverse tumor morphology [10,11]. The structural diversity of these tumors, combined with the intricate anatomical structures present in the anterior, middle, and posterior fossa, often makes the removal of giant pituitary adenomas a complex task [12]. The removal of pituitary adenomas may be associated with postoperative complications such as cerebrospinal fluid (CSF) leak, diabetes insipidus, additional pituitary dysfunction, visual deterioration, and hydrocephalus [13], while a subtotal resection can be associated with pituitary apoplexy in up to 5.65% of cases [14]. Mitigation of these symptoms has been attempted through the use of several different surgical strategies with the intent of reducing the morbidity of this operation. The microscopic transsphenoidal approach has historically been associated with lower morbidity compared to transcranial strategies. However, with the development of endoscope-assisted microneurosurgery, surgical risks have been further reduced. As a result, the use of the endoscopic endonasal transsphenoidal (EET) approach has flourished in recent decades [15].
Contemporary methods of EET surgery involve the resection of the middle turbinate, with or without dissection of a nasoseptal flap, resection of the posterior septum, and exposure of the sphenoid ostium [16,17]. Often, the initial stage is completed by the otorhinolaryngology surgery team, and the subsequent steps are completed in collaboration with the neurosurgery team [18]. The sphenoid ostium is opened by removing the anterior wall of the sphenoid sinus, which allows access to the sella turcica [19]. Further extension may be necessary, and the choice of approach (transplanum, transclival, pterygomaxillary, or transorbital) will depend on the morphology of the pituitary adenoma to ensure proper visualization of the tumor and infundibular region.
Meningiomas originate from arachnoid cells and are classified into three WHO grades based on their histopathology [20,21]. Of significance, Grade I meningiomas have an increased likelihood of developing at the skull base [22]. The symptoms associated with meningiomas are often non-specific (headache, seizure, cognitive change, vertigo, ataxia), but may involve cranial nerve deficits dependent upon tumor morphological distribution. When it comes to skull base tumors, cranial nerve deficits are more likely to occur [7,23]. Furthermore, anterior clinoid meningiomas are known to have the propensity to present with visual impairments and exophthalmos [24]. Due to their slow growth rate and tendency to present later in life, small meningiomas (less than 3 cm) are often left untreated and followed with serial imaging. However, larger meningiomas, particularly those causing symptoms, are typically surgically resected and treated with stereotactic radiation, based on the WHO grading scale [8].
Similar to pituitary adenomas, giant anterior clinoid meningiomas present unique challenges for complete resection. A significant challenge arises from the tumor's tendency to compress the neurovascular structures associated with the anterior clinoid process [25]. The extent of tumor invasion also plays a significant role in the challenges associated with completely excising giant anterior clinoid meningiomas. The Al-Mefty classification system is based on the microanatomy of the tumor and is divided into three groups. Group I clinoid meningiomas extend over the inferior aspect of the anterior clinoid process and encircle the internal carotid artery; group II clinoid meningiomas are derived from the superior portion of the anterior clinoid process and are covered by arachnoid; group III clinoid meningiomas are derived from the optic foramen [26]. The typical method used for resecting giant anterior clinoid meningiomas consists of a pterional transsylvian approach, although alternative methods including orbitocranial or cranio-orbitozygomatic approaches combining both intra- and extradural techniques have been proposed for giant anterior clinoid meningiomas [27][28][29]. However, the utilization of these techniques varies and is primarily dependent on the unique characteristics of the tumor and the neurosurgeon's level of expertise.
Case 1
A 65-year-old female presented with encephalopathy secondary to a suprasellar mass with symptoms of chronic bitemporal hemianopsia, which had worsened over the course of three years. Prior to admission, she experienced a sudden deficit and could only perceive light and movement. Due to the deterioration in vision, urgent surgical intervention was recommended. A CT scan with contrast and navigation sequences revealed a heterogeneously enhancing mass measuring 6 × 2.4 × 6 cm that originated from the sella and extended inferiorly into the sphenoid sinus and anterior fossa. Additionally, there was a superior extension into the left corona radiata, clivus, and left middle fossa. The mass also encased the left internal carotid artery and extended to the bilateral cavernous sinuses as well as to the left crural, ambiens, and cerebellopontine cisterns. A 3 mm rightward midline shift of the left lateral ventricle was noted due to mass effect, as shown in Figures 1 and 2. Given the recent visual decline, a preoperative MRI could not be obtained and the patient was taken urgently for surgery. A staged procedure was planned due to the tumor's lateral extension. Stress doses of steroids were given perioperatively and tapered to a maintenance dose, and the patient continued levothyroxine supplementation. The patient was taken to the operating room for stage 1, an endoscopic endonasal tumor resection with the goal of resecting the intrasellar and suprasellar compartments of the tumor to alleviate pressure on the optic apparatus (Figure 3). The sellar component of the tumor was removed using skull base ring curettes, a side-cutting aspirator, and an ultrasonic aspirator. In the superior regions of the sellar component, patties were used to retract the arachnoid tissue, allowing debulking of the tumor around the cavernous carotid arteries using endoscopic and microsurgical techniques. The midline segment of the tumor was removed, except for the segments in the middle and posterior fossa, which could not be reached at this angle of approach, as shown in Figure 3.
The patient was brought back into the operating room for stage 2 a week later, which involved a left-sided modified pterional transcavernous and transsylvian approach to remove the tumor from the middle and posterior fossa. The procedure required careful dissection around the left supraclinoid carotid, left middle cerebral artery, left posterior communicating artery, bilateral posterior cerebral arteries, and basilar apex. The oculomotor nerve was fully decompressed from the cavernous sinus tumor, and a small segment of fibrous tumor was left attached to the left cavernous carotid artery. The surgical site was closed using a dural substitute, surgical glue, and fat harvested from the abdomen. Postsurgical imaging of the subtotal resection is shown in Figures 4 and 5, and an illustrated summary of the staged approach is presented in Figure 6. At the 6-month follow-up, the patient maintained light and movement perception, the postoperative left oculomotor palsy had improved, and no further deficits were encountered.
Case 2
A 74-year-old male presented with cognitive decline over several months, as well as memory and visual deficits, accompanied by a significant decline in balance. Imaging revealed a large extra-axial mass, measuring 4.3 × 6.3 cm, located at the right anterior clinoid process. The mass showed significant suprasellar and cavernous sinus extensions into the bilateral anterior fossa, middle fossa, and partially within the posterior fossa in the retroclinoid space, overall resulting in significant compression of the optic chiasm and brainstem. There was significant vasogenic edema and obstructive hydrocephalus. This is illustrated in Figure 7.
The patient was brought into the operating room, and the plan was to resect the tumor in two stages during the same operation if it proved to be of hard consistency on the first approach. During craniotomy stage one, a modified right-sided pterional, transzygomatic, and pretemporal approach was utilized, which included extradural clinoidectomy and optic canal decompression. This was followed by transsylvian dissection. The dissection was continued to the superior aspect of the cavernous sinus, allowing visualization of the anterior cerebral arteries and optic nerves. The optic canal was decompressed, and dissection was continued toward the internal carotid arteries and middle fossa, from which a portion of the tumor was resected. The tumor was then meticulously separated from the right internal carotid artery, the right middle cerebral artery, the lateral lenticulostriate branches, the bilateral anterior cerebral arteries (A1 and A2 segments), the right anterior choroidal artery, and the right posterior communicating artery. Then, the approach continued with tumor devascularization, which was achieved by capsule electrocoagulation and by placing aneurysm clips on two main arteries that supplied the tumor. Neurophysiology monitoring was used to provide standard assistance during the procedure and confirmed stable somatosensory evoked potentials and electroencephalography during temporary clipping of the vasculature.
As the entire tumor showed a hard consistency, the superior aspect of the lesion was inaccessible through this approach. Therefore, stage two was initiated with a separate right frontal craniotomy using the same incision. A transfrontal, transcortical, and transventricular microsurgical approach was performed on the right side. The giant anterior clinoid meningioma was visualized through this approach and successfully debulked, exposing the anterior cerebral arteries, right internal carotid artery, and middle cerebral artery. The two large arterial tumor feeders that had been temporarily clipped were then electrocoagulated, and permanent titanium clips were applied. After the tumor was sufficiently removed, fat was harvested from the abdomen and placed in the pterional area. Additionally, a right frontal external ventricular drain was left in place under direct visualization. Postoperative imaging is illustrated in Figure 8, while a summary of the staged approach is shown in Figure 9. The external ventricular drain was removed on postoperative day 5. At 6 months after surgery, there was significant improvement in ambulation and cognition without additional neurological deficits.
Pathologic examination of the tumor confirmed a low-grade meningioma. The Ki67 indices were low, with mild focal elevation. E-cadherin, BAP-1, and PR stains were positive, and GFAP staining indicated glial tissue along the edges without any indication of brain invasion.
Giant Pituitary Macroadenoma Surgery
In 1992, Jankowski et al. [30] described the first successful endonasal endoscopic resection of a pituitary adenoma in three patients. This represented a major transition from the previously popular method of microscopic transsphenoidal resection via a sublabial or endonasal approach. The endoscopic endonasal approach has been associated with significant improvements in morbidity and mortality associated with the removal of pituitary adenomas [31].
The advancements in the resection of pituitary adenomas using the EET approach have been well documented [32,33]. EET surgery has been reported to achieve resection rates greater than 80% in tumors with a volume of 18 cm³, as well as gross total resection rates up to 44% [34]. Postoperatively, there was a significant improvement in visual function (82%) and pituitary function (20-72%) in those who presented with pituitary dysfunction [33][34][35]. McLaughlin et al. concluded that the use of endoscopy allowed for the removal of adenomas in an additional 36% of patients, thanks to improved visualization. Furthermore, the benefit of endoscopy was accentuated in patients presenting with tumors larger than 2 cm, permitting the removal of 54% of pituitary adenomas [32].
However, significant complications have been reported with this procedure. As many as 37% of patients have experienced complications, which include sinusitis (13.7%), CSF leak (9.6-11.4%), and SIADH (4.1%), as well as headache, epistaxis, meningitis, and hydrocephalus in a minority of patients [33,35]. Additional clinical reports have detailed complications of diabetes insipidus at rates as high as 25-45.5%, with a minority experiencing ischemic stroke [34,35].
It is relevant to mention that the endoscopic endonasal approach, with an expanded transtubercular approach, can still be associated with a lower degree of resection when the tumor has a significant lateral extension, harder consistency, or cavernous sinus extension [36], and an open transcranial approach has a significant role in its surgical treatment [37].
Surgical Pearls
Giant pituitary macroadenomas present a significant challenge due to their extension into various anatomical compartments. In our example case, the tumor extended into three different anatomical compartments of the skull base, involving the neurovascular structures. A meticulous review of all available images is necessary to plan the different steps of the operation. After obtaining appropriate medical clearance and ensuring stability, surgery should be performed promptly. Given the significant visual deficit, the initial planned step was to debulk the tumor mass inferiorly through an endoscopic endonasal approach with a transplanum sphenoidale extension to alleviate ventral compression of the optic chiasm. A combination of microsurgical and endoscopic tumor dissection techniques was required. Usually, as in this case, the capsule of a pituitary adenoma is harder in consistency, requiring careful dissection from the anterior cerebral arteries, and it is of utmost importance to maintain the anatomical landmarks and respect the trajectory of the optic apparatus. Hypervascularity is a common feature of tumors with a hard consistency. To address this, a combination of different endonasal bipolar electrocoagulators may be necessary, including long and fine-tipped regular bipolar ones.
The significant lateral, superior, and posterior extension of the tumor was the deciding factor for a staged operation, where a modified pterional, extradural anterior clinoidectomy, transcavernous, and transsylvian approach allowed access to all of these compartments. After opening the intradural space, an important goal is to establish a plan for resecting a giant tumor in sectors: (a) the anterior sector of the tumor, with multiple critical anatomical landmarks including the optic nerves and chiasm, the anterior cerebral arteries, and the anterior communicating artery complex; (b) the central sector, with the suprasellar and intercarotid space, dealing with the bilateral internal carotid arteries and the left posterior communicating and choroidal arteries while distinguishing normal vasculature from tumor-feeding vessels; and (c) the posterior sector, with tumor extension to the interpeduncular fossa and left cerebellopontine angle cistern, dealing with the bilateral posterior cerebral arteries (P1 segments), basilar apex, and thalamo-perforating vessels. If a tumor segment exhibits tight adhesions to neurovascular structures, making it difficult to identify a clear cleavage plane, we recommend leaving some tumoral tissue in place rather than risking neurovascular injury. As part of our routine, we always keep aneurysm clips, clip appliers, and cerebral bypass instruments available in the operating room in case they are needed.
During the dissection of the lateral wall of the cavernous sinus, it is important to identify the normal trajectory of the nerves, especially the oculomotor and trochlear nerves, before accessing either the roof or lateral wall of the cavernous sinus. The oculomotor nerve is highly sensitive to manipulation and requires adequate release from the oculomotor cistern and the lateral wall of the cavernous sinus after peeling off the dura from the middle fossa. A temporary oculomotor deficit often improves within weeks or months after surgery, and this possibility should be thoroughly discussed with patients and their families prior to the operation. The consistency and vascularity of the tumor within the cavernous sinus will determine the extent of resection required.
Postoperative care in the intensive care unit, along with an adequate protocol for managing diabetes insipidus and individualized hormone replacement, is essential.
Giant Anterior Clinoid Meningioma Surgery
There is significant debate regarding the most effective strategy for removing giant anterior clinoid meningiomas, mainly due to the unique challenges discussed earlier in resecting these tumors [38][39][40][41][42]. Two popular techniques involve either a vascular or skull base perspective. The vascular strategy of tumor debulking involves dissecting the sylvian fissure to trace the middle cerebral artery to the internal carotid artery while removing the tumor and its associated perforating arterial supply [43]. However, this school of thought is commonly criticized due to the strain placed on the sylvian fissure [26]. The alternative solution to this issue involves performing a pretemporal dissection and an anterior clinoidectomy to expand the operating field while minimizing brain retraction at the sylvian fissure [44]. This technique is associated with several advantages, such as early optic nerve decompression, early identification of the internal carotid artery, and associated devascularization of the meningioma [44]. This is associated with a decreased rate of complications. The occurrence of postoperative vascular complications has been reported in 2% of cases; cranial nerve deficits occurred in 5.5% of patients; and the overall patient mortality rate was 1.2% [45].
Further modifications to the anterior clinoidectomy have been reported. This includes extradural, intradural, and hybrid approaches to this technique. However, the merits of each of these subtechniques are still debated [45]. The use of preoperative embolization has been reported, but it is also problematic. External carotid artery branches are safer to embolize, with limited opportunities for branches arising directly from the internal carotid artery. One clinical trial found a complication rate of 12% associated with this practice, which seemed to provide little benefit to the quality of patient postsurgical recovery [46]. Additional methods are still under exploration. One such method is the Dolenc approach, which involves an extradural clinoidectomy and transdural debulking of the tumor. A study reported that 67% of patients had better vision outcome [38,39]. The gross total resection rate was 30.4%, and partial resections were achieved in 34.8% of surgeries [38,41].
When considered as a complete entity, giant anterior clinoid meningiomas had a gross total resection rate of 64.2% [45], while 25% of cases resulted in subtotal resections [41]. The reported operative mortality was 6.7%, and recurrence was observed in 11.8% of cases [40]. According to Nanda et al., four out of thirty-six patients who underwent surgery for clinoidal meningiomas experienced recurrence, with a median duration of 89 months, and one patient required repeat surgery [41]. Furthermore, it is worth noting that gross total resections of group I giant anterior clinoid meningiomas were limited to only 11.8% due to the anatomical difficulties associated with this type of tumor [45]. The minimally invasive options such as the endoscopic endonasal transtubercular approach or endoscopic-assisted supraorbital key-hole approach are ideal for midline lesions such as tuberculum sella meningiomas [46], although those options have a limited role in anterior clinoid meningioma and even less so in a giant tumor given its anatomical skull base implications [42,47].
Surgical Pearls
Meningiomas located on the anterior clinoid process may vary in size. A tumor with extension involving the anterior, middle, and posterior fossae and the intraventricular area requires a detailed analysis of preoperative imaging for effective planning. This case example involved a lesion larger than 6 cm originating from the right anterior clinoid process, with significant suprasellar and lateral cavernous sinus extension into the bilateral anterior fossa, middle fossa, and a segment of the posterior fossa, with significant compression of the optic chiasm and brainstem. A preoperative angiogram is routinely obtained for meningiomas located in this area to assess the possibility of embolizing branches from the external carotid artery, because branches of the internal carotid artery pose a higher risk for ischemic complications. The angiogram is obtained to define the blood supply to the tumor, regardless of whether it is possible to embolize it or not. This can provide information on the degree of displacement of the normal vasculature, collateralization, the presence of posterior communicating arteries, and cross-flow through the anterior communicating artery. All of these factors, combined with preoperative magnetic resonance imaging and computed tomography, can help define the surgical strategy. Additional tools, such as neuronavigation and neuromonitoring with techniques including somatosensory evoked potentials, brainstem auditory evoked responses, and electroencephalography, are always helpful.
Given the location of the tumor, vascular proximal control measures need to be planned, either cervical carotid control with prep of the cervical area, petrous carotid control through the middle fossa, or clinoid carotid control. Occasionally, there is a need to perform temporary clipping of some tumor-feeding vessels, and neuromonitoring provides feedback on physiological stability during these episodes. The type of craniotomy, such as pterional, modified pterional, orbitocranial, or cranio-orbito-zygomatic, can be selected based on the patient's unique anatomy. Ideally, transzygomatic or cranio-orbito-zygomatic approaches can be used for tumors with significant superior extension, as in the case example presented. In addition, techniques such as pretemporal dissection and extradural clinoidectomy are recommended to partially devascularize the tumor and remove its origin with early optic canal decompression; the optic nerve can subsequently be released intradurally after opening the falciform ligament.
After meticulous extradural dissection and careful attention to hemostasis, the initial intradural approach aims to explore the anatomical distortion caused by the tumor and to identify normal anatomy. This involves searching for the optic nerves, oculomotor nerves, internal carotid arteries, and anterior and middle cerebral arteries. This tumor was wrapped around the right internal carotid artery and right middle cerebral artery. The goal was to divide it into sectors, starting with the lateral component in the middle fossa and around the middle cerebral artery. Central debulking was performed using high microsurgical magnification and an ultrasonic aspirator. Releasing a tumor from vascular structures such as the middle cerebral artery and towards the carotid bifurcation can be performed using constant micro-Doppler and neuronavigation to map the vascular trajectory. Once an arterial trunk is found, it can be followed proximally to remove the majority of the tumor.
In cases where a tumor is calcified around the supraclinoid carotid artery or middle cerebral artery, it may be necessary to leave a cuff of the tumor to avoid causing unnecessary injury. Through the middle fossa approach, the sector adjacent to the crural and ambiens cisterns can be carefully resected while dissecting the posterior communicating artery and perforating vessels. If a tumor extends significantly in the inferior direction to the cerebellopontine angle, an anterior petrosectomy may be necessary; however, it was not required in this case. The anterior segment of the tumor is gradually dissected, following the ipsilateral and contralateral A1 segments of the anterior cerebral artery. After performing devascularization, central and superior debulking of the mass was continued, although given the hard consistency of the tumor and its capsule, it was decided to perform a staged frontal craniotomy. This procedure was conducted through a small right frontal coronal craniotomy, with a small transcortical approach used to reach the right lateral ventricle. Central debulking was then performed, followed by medial dissection of the capsule through the arachnoid plane from the anterior cerebral artery A2 and A3 segments and lateral dissection from the superior trunk of the middle cerebral artery. Finally, the dissection was carefully performed from the superior aspects of the optic nerves and chiasm.
Conclusions
Giant tumors located in the skull base pose significant challenges due to their size, location, and proximity to critical neurovascular structures. Despite these challenges, current advances in surgical techniques and new technology have made it possible to safely remove giant skull base tumors. Neurosurgeons may employ a combination of open and minimally invasive approaches, such as endoscopic techniques, microvascular dissection, and microanastomosis, if necessary. The success of skull base surgery for giant tumors depends on several factors, including the tumor's location and size, the patient's overall health, the appropriate selection of treatment, and the expertise of the surgical team. Surgery for these patients is preferably performed at highly specialized centers with a multidisciplinary approach in order to provide a higher chance of success and long-term survival.
Institutional Review Board Statement: Ethical approval is not required for retrospective case report studies without identifiable information in accordance with Loma Linda University Institutional Review Board guidelines.
Informed Consent Statement: Patient's consent is not required for retrospective case report studies without identifiable information in accordance with Loma Linda University Institutional Review Board guidelines.
Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Consolidation treatment of durvalumab after chemoradiation in real-world patients with stage III unresectable non-small cell lung cancer
Background Treatment for unresectable stage III non-small cell lung cancer (NSCLC) mainly involves concurrent chemoradiation (CRT). Post-CRT consolidation treatment with durvalumab is a major therapeutic advance that provides survival benefit in this group of patients. However, the performance of this treatment strategy remains to be studied in a real-world setting. Methods A total of 31 patients who had disease control post-CRT were included in the durvalumab early access program (EAP) as an intent-to-treat cohort and retrospectively reviewed for post-CRT progression-free survival (PFS) and time to metastatic disease or death (TMDD). The neutrophil-to-lymphocyte ratio (NLR) at the initiation of durvalumab was analyzed in 29 patients. Results The median time from the completion of concurrent CRT to the initiation of durvalumab was 2.8 months. The objective response rate was 25.8%, and the 12-month PFS and TMDD-free rates were 56.4% and 66.9%, respectively. The low NLR patients showed a significantly longer post-CRT PFS (not reached vs. 12.0 months [95% CI: 5.5–not estimable]; P = 0.040; hazard ratio for disease progression or death, 0.23 [95% CI: 0.05–1.00]; P = 0.048) and a higher 12-month post-CRT PFS rate (82.5 vs. 42.6%). The post-CRT TMDD (not reached vs. 12.6 months [95% CI: 10.8–not estimable]; P = 0.010; hazard ratio for distant metastasis or death, 0.11 [95% CI: 0.01–0.88]; P = 0.037) and the 12-month post-CRT TMDD-free rate (90.9 vs. 57.1%) also significantly favored the low NLR patients. Conclusions Durvalumab consolidation treatment in real-world patients showed substantial efficacy, and the correlation with the NLR level warrants further investigation.
Introduction
Stage III NSCLC represents a heterogeneous group of disease entities that are potentially curable and are usually managed with multimodality treatments involving radiotherapy, chemotherapy, and surgical resection. 1,2 For patients with unresectable stage III disease, definitive chemoradiation delivered either concomitantly or sequentially has long been the standard of care, whereas the survival rate beyond five years remains dismal at around 15%-30%. [3][4][5] The poor long-term survival for unresectable stage III NSCLC patients, as a result of the subsequent progression and metastasis of the residual disease following definitive chemoradiation, has been a major challenge that demands an effective consolidation treatment. 6 Previous trials which studied the role of consolidation chemotherapy have mainly yielded disappointing results. [7][8][9] Ahn et al. investigated a combination of docetaxel and cisplatin in a randomized phase III trial, which showed that the chemotherapy group using this combination was not superior to the control group in which the patients only received best supportive care after chemoradiation. 7 The other phase III trial of similar study design, while applying a different chemotherapeutic combination, vinorelbine plus cisplatin, also yielded a similar outcome. 8 The lack of clinical benefit of consolidation chemotherapy was also noted in a meta-analysis involving more than 3000 unresectable stage III NSCLC patients. 10 One of the major reasons that patients failed to benefit from consolidation chemotherapy may be the poor tolerance of this approach in the wake of definitive chemoradiation. In the earlier phase III study, a significant portion of the intent-to-treat population randomized to the consolidation chemotherapeutic arm was unable to initiate the treatment; moreover, nearly 40% of patients who initiated the treatment failed to follow the defined treatment protocol. 7 In addition, one study that applied etoposide and cisplatin-based chemoradiation noted that the subsequent consolidation chemotherapy was even associated with increased treatment-related infection and death. 9 Recently, the strategy to apply immune checkpoint inhibitors as a consolidation treatment after chemoradiation has demonstrated promising results. 11,12 Previous studies have reported that chemoradiation can give rise to a number of immune reactions crucial to tumor containment. These include increased type I interferon and major histocompatibility complex (MHC) class I expression, as well as enhanced priming capacity of tumor-infiltrating dendritic cells, [13][14][15] all of which may contribute to the increased tumor infiltration of effector CD8 T cells. [16][17][18] Given that, the subsequent administration of programmed cell death 1 (PD-1) or PD-1 ligand (PD-L1) inhibitors, acting to mitigate the PD-1/PD-L1-mediated immunosuppression, sustains the effector immunity established post-chemoradiation around the tumor microenvironment and thereby contains the residual disease through an operational immune surveillance.
In this regard, the PD-L1 inhibitor durvalumab given in the wake of concurrent chemoradiation represented a major advance in consolidation strategies. In the phase III PACIFIC study, 11,12 durvalumab administered at six weeks post chemoradiation, compared to placebo, showed a significantly longer PFS, time to distant metastasis, and overall survival (OS). Another phase II study with a similar strategy applied the PD-1 inhibitor pembrolizumab and also reported a longer time to distant metastasis compared to historical controls. 19 Nevertheless, earlier studies have shown that radiotherapy may give rise to systemic lymphopenia, 20 and whether the level of peripheral white blood cells is associated with the efficacy of consolidation treatment using checkpoint inhibitors remains unclear. On the other hand, in a real-world setting, the compliance with and tolerance of consolidation treatment are often limited by the toxicities directly related to the definitive chemoradiation given beforehand. As such, whether durvalumab consolidation in a real-world setting performs as well as it did in previous trials requires further investigation.
In the present study, we analyzed a group of stage III unresectable NSCLC patients who had disease control after concurrent chemoradiation and intended to receive consolidation treatment using durvalumab. We report herein the preliminary results of durvalumab consolidation in this intent-to-treat population.
Study patients
Between January 2018 and November 2018, 33 consecutive patients (Fig 1) with histologically documented locally advanced unresectable stage III NSCLC based on chest computed tomography (CT) scan, magnetic resonance imaging, positron emission tomography (PET) and dedicated multidisciplinary assessment were retrospectively reviewed. Patients received concurrent chemoradiation therapy (CRT) with the protocol of radiotherapy 66-70 Gy in 32-35 fractions and chemotherapy using weekly docetaxel (20 mg/kg) or vinorelbine (15 mg/kg) plus cisplatin (20 mg/kg) for five weeks. A total of 31 patients (93.9%, Fig 1) who had disease control after concurrent CRT were entered into the durvalumab early access program (EAP), receiving consolidation treatment using durvalumab 10 mg/kg every two weeks, and were included in the overall efficacy analysis as the intent-to-treat cohort. Among these 31 patients, two patients underwent disease progression before the initiation of durvalumab and did not receive the treatment. PD-L1 expression was assessed with a Dako PharmaDx 22C3 immunohistochemistry assay, and the tumor proportion score was calculated and reported as previously described. 21 The study took advantage of the Chang Gung Research Database and was approved by the Ethics Committee of Chang Gung Memorial Hospital. The study was performed in accordance with the ethical standards of the 1964 Declaration of Helsinki. Written informed consent was provided by all study participants.
Outcome assessment
After the completion of concurrent CRT, a CT scan was performed at six weeks and taken as the baseline image. All patients were confirmed to have nonprogressive disease at the baseline assessment and received subsequent imaging studies every eight weeks. The overall post-CRT PFS, calculated between the date of the radiological confirmation of disease control post-CRT and the date of radiologically documented progression on durvalumab consolidation or death, was analyzed according to the intent-to-treat principle. A specific pattern of progression, the post-CRT TMDD, was also analyzed according to the same principle. The treatment response, defined as complete response (CR), partial response (PR), stable disease (SD), or progressive disease (PD), was evaluated according to the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1. The toxicities of durvalumab were regularly assessed and recorded during the treatment course by the attending physician, and toxicity was graded according to the National Cancer Institute Common Toxicity Criteria, version 5.0.
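Both endpoints defined above reduce to date arithmetic with an event/censoring flag. The following is a minimal sketch of that bookkeeping in Python; the variable names and dates are hypothetical and are not taken from the study data.

```python
from datetime import date

def interval_months(start: date, end: date) -> float:
    """Interval in months, approximating a month as 30.44 days."""
    return (end - start).days / 30.44

# Hypothetical example record (dates and outcomes are illustrative only)
baseline_ct = date(2018, 3, 1)    # post-CRT disease control confirmed
progression = date(2019, 3, 10)   # radiologic progression (None if censored)
distant_met_or_death = None       # None -> censored for TMDD
last_follow_up = date(2019, 11, 30)

pfs_event = progression is not None
pfs_months = interval_months(baseline_ct, progression if pfs_event else last_follow_up)

tmdd_event = distant_met_or_death is not None
tmdd_months = interval_months(baseline_ct, distant_met_or_death if tmdd_event else last_follow_up)

print(round(pfs_months, 1), pfs_event)    # 12.3 True
print(round(tmdd_months, 1), tmdd_event)  # 21.0 False
```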
Statistical analysis
The Mann-Whitney test was used to determine the statistical significance of differences between two groups of continuous variables, and Fisher's exact test was used for categorical variables. Kaplan-Meier survival curves were analyzed with the R package survival. The Cutoff Finder, an R language-based web interface, was used to determine the cut points of continuous variables. 22 All reported P-values were two-sided, and a P-value < 0.05 was considered statistically significant. All the data were also analyzed with SPSS v.25 (SPSS Corp., Chicago, IL, USA).
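The group comparisons described here follow the standard Kaplan-Meier/log-rank/Cox pattern. As noted above, the original analysis was performed with the R survival package and SPSS; the sketch below only illustrates the same kind of analysis in Python with the lifelines package, using made-up numbers, and is not a reproduction of the study's computations.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient data: follow-up time (months), event flag, NLR group
df = pd.DataFrame({
    "months":  [14.2, 6.5, 12.0, 8.0, 9.8, 18.3, 5.5, 20.0],
    "event":   [1,    1,   0,    1,   1,   0,    1,   0],
    "low_nlr": [1,    0,   0,    1,   0,   1,    0,   1],
})

low, high = df[df.low_nlr == 1], df[df.low_nlr == 0]

# Kaplan-Meier estimate for one group
kmf = KaplanMeierFitter()
kmf.fit(low["months"], low["event"], label="low NLR")
print(kmf.median_survival_time_)

# Log-rank comparison between groups
result = logrank_test(low["months"], high["months"], low["event"], high["event"])
print(result.p_value)

# Hazard ratio from a Cox model with the NLR group as the only covariate
cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
print(cph.hazard_ratios_)
```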
Baseline patient characteristics
The baseline characteristics of the 31 intent-to-treat patients are summarized in Table 1.
Figure 1 Flow chart of the study population in which the durvalumab intent-to-treat cohort and the on-treatment patients received the major analysis. NLR, neutrophil-to-lymphocyte ratio.
Outcome of post-CRT tumor control with durvalumab consolidation
The outcomes of post-CRT tumor control applying durvalumab consolidation, in terms of the post-CRT PFS and TMDD, were analyzed for the durvalumab intent-to-treat cohort. At the time of analysis, the Kaplan-Meier curves showed that the median post-CRT PFS and TMDD were both not reached, whereas the 12-month PFS and TMDD-free rates were 56.4% (Fig 2a) and 66.9% (Fig 2b), respectively. The objective responses of the intent-to-treat cohort, in terms of CR, PR, SD, and PD, were 0%, 25.8%, 54.8%, and 12.9%, respectively, and two (6.5%) patients were not assessed for response because they did not receive durvalumab treatment (Table 2).
Association between neutrophil-to-lymphocyte ratio and post-CRT PFS and TMDD
To assess the association between the peripheral blood neutrophil-to-lymphocyte ratio (NLR) and the post-CRT PFS and TMDD on consolidation treatment, the patients on treatment with durvalumab (Fig 1) were divided based on the level of NLR at the initiation of durvalumab, herein an NLR of 3.8, as determined by the Cutoff Finder. Most of the baseline clinical characteristics, including age, sex, ECOG PS, histology, EGFR and ALK mutation status, PD-L1 tumor proportion score, CRT protocol, and timing of durvalumab initiation, were similar between the high and low NLR groups. The low NLR group showed a significantly longer median post-CRT PFS (not reached vs. 12.0 months [95% CI: 5.5–not estimable]; P = 0.040; Fig 3a) and a higher 12-month post-CRT PFS rate (82.5 vs. 42.6%; Fig 3a). In terms of the post-CRT TMDD, the low NLR group also showed a significantly longer median post-CRT TMDD (not reached vs. 12.6 months [95% CI: 10.8–not estimable]; P = 0.010; Fig 3b) and a higher 12-month post-CRT TMDD-free rate (90.9 vs. 57.1%; Fig 3b). In contrast, analyses based on the ANC and the ALC levels showed only numerical differences between groups (Fig 4a–d).
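The NLR itself is simply the absolute neutrophil count divided by the absolute lymphocyte count, and the Cutoff Finder approach cited in the methods essentially scans candidate cut points for the one that best separates outcomes. The sketch below illustrates that idea on hypothetical data; the study's actual cut point of 3.8 comes from the Cutoff Finder analysis, not from this code.

```python
import numpy as np
from lifelines.statistics import logrank_test

# Hypothetical patient-level data
anc = np.array([5.2, 7.8, 4.1, 9.0, 3.5, 6.6, 8.4, 4.9])   # absolute neutrophil count (10^3/uL)
alc = np.array([1.6, 1.1, 1.5, 1.0, 1.4, 1.2, 0.9, 1.7])   # absolute lymphocyte count (10^3/uL)
months = np.array([20.0, 6.5, 18.3, 5.5, 22.1, 12.0, 9.8, 14.2])
event = np.array([0, 1, 0, 1, 0, 1, 1, 0])

nlr = anc / alc  # neutrophil-to-lymphocyte ratio

# Scan candidate cut points and keep the one with the smallest log-rank p-value
best = None
for cut in np.quantile(nlr, np.linspace(0.2, 0.8, 13)):
    lo, hi = nlr <= cut, nlr > cut
    if lo.sum() < 2 or hi.sum() < 2:
        continue
    p = logrank_test(months[lo], months[hi], event[lo], event[hi]).p_value
    if best is None or p < best[1]:
        best = (cut, p)

print(best)  # (selected cut point, corresponding p-value) on this toy data
```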
Adverse events profile of durvalumab treatment
The most commonly noted all-grade adverse events in patients receiving post-CRT durvalumab consolidation included skin rash (seven patients; 24.1%), pruritus (five patients; 17.2%), pneumonitis (five patients; 17.2%), elevated AST/ALT (three patients; 10.3%), diarrhea (three patients; 10.3%) and cough (three patients; 10.3%; Table 4). Overall, serious adverse events of grade 3-5 were noted in four (13.8%) patients, of which pneumonitis occurred in two (6.9%) patients, skin rash in one (3.4%) patient, and elevated AST/ALT in one (3.4%) patient. Logistic regression was performed (Table 5) to analyze the clinical factors predictive of pneumonitis, including age, ECOG PS, smoking status, chemotherapeutic agents used, PD-L1 status and NLR. However, none of them were found to be associated with the development of pneumonitis in the analysis.
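The predictor analysis for pneumonitis described above is a standard binary logistic regression. The following is a minimal, hypothetical sketch of such a model in Python with statsmodels; the covariates, coding, and data are illustrative only and do not reproduce Table 5.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical covariates: age (years), ECOG PS (0/1), NLR; outcome: pneumonitis (0/1)
rng = np.random.default_rng(0)
n = 29
X = np.column_stack([
    rng.normal(63, 9, n),     # age
    rng.integers(0, 2, n),    # ECOG PS
    rng.normal(4.0, 1.5, n),  # NLR
])
y = rng.integers(0, 2, n)     # pneumonitis yes/no (toy outcome)

model = sm.Logit(y, sm.add_constant(X)).fit(method="bfgs", disp=False)
print(model.params)               # intercept and log-odds coefficients
print(np.exp(model.params[1:]))   # odds ratios for age, ECOG PS, NLR
print(model.pvalues)
```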
Discussion
This work reported the preliminary results of the post-CRT PFS and TMDD with durvalumab consolidation treatment in an intent-to-treat cohort of stage III unresectable NSCLC patients who participated in the durvalumab EAP in a real-world setting. We noted a 56.4% post-CRT 12-month PFS rate and a 66.9% post-CRT 12-month TMDD-free rate in this real-world intent-to-treat cohort. When the NLR level, but not those of the ANC or ALC, was taken into account at the administration of durvalumab, we noted a significantly longer post-CRT PFS and TMDD for the low NLR patients in the durvalumab on-treatment cohort. All-grade adverse events were noted in 72.4% of the on-treatment patients, with pneumonitis found in 17.2%. However, no clinical factor was found to be predictive of the development of pneumonitis in this analysis. The consolidation treatment for stage III unresectable NSCLC patients, usually given through chemotherapeutic agents, has been a controversial practice, mainly due to limited tolerability secondary to the side effects that follow the completion of concurrent CRT. In this analysis, the median time between the completion of concurrent CRT and the initiation of durvalumab consolidation was 2.8 months. This time window, somewhat longer than that in the reference PACIFIC study, can be partly explained by the patients' post-CRT tolerability as well as the physicians' precaution toward consolidation treatment immediately after the completion of CRT in a real-world setting. Nevertheless, as patients may experience disease progression in this window, the influence was taken into account by applying the intent-to-treat definition to the EAP cohort. With this definition, two (6.5%) patients who had disease progression in the window and had not received durvalumab were included in the analysis to avoid overestimation of the effect of durvalumab treatment. Following this principle, we demonstrated that the post-CRT 12-month PFS rate and the TMDD-free rate in this real-world cohort were similar to those in the PACIFIC trial. In addition, the 25.8% response rate of durvalumab consolidation in this analysis was similar to the 30.0% reported in the PACIFIC trial.
In addition, the present study showed an association between the level of NLR and post-CRT tumor control with durvalumab treatment. Previous studies of advanced melanoma patients receiving immune checkpoint inhibitors have shown that peripheral blood markers may play a prognostic as well as a predictive role in their treatment. Giacomo et al. 23 and Delyon et al. 24 reported that a high lymphocyte count, or a change in the count slope at the start of ipilimumab treatment, was correlated with better overall survival (OS) and treatment response, a finding which was similarly noted in a Japanese melanoma cohort receiving ipilimumab treatment. 25 On the other hand, the significance of the neutrophil count was also highlighted by the work of Capone et al., in which a high neutrophil-to-lymphocyte ratio was correlated with worse OS and PFS. 26 Recently, in advanced NSCLC patients treated with PD-1/PD-L1 inhibitors, the derived neutrophil-to-lymphocyte ratio was shown to be associated with OS and PFS, 19 whereas this association was not observed in patients who received chemotherapy. In this preliminary report involving locally advanced NSCLC patients who received post-CRT durvalumab treatment, the correlation between the baseline NLR and post-CRT disease progression and distant metastasis was first demonstrated. The levels of the individual white blood cell types, namely the ANC and the ALC, showed less significant clinical implications in this analysis. In this real-world EAP cohort, we noted that 17.2% of patients had treatment-related pneumonitis, of whom 6.9% required discontinuation or interruption of durvalumab.
A number of clinical factors, including squamous cell carcinoma histology and a poorer ECOG PS of 1, were noted to be associated with the development of pneumonitis in the PACIFIC study, whereas no specific clinical factors predictive of the development of pneumonitis were identified in this analysis. In addition, 6.9% of the patients experienced treatment-related elevation of amylase and lipase, a finding not commonly reported elsewhere. However, this treatment-related abnormality was usually mild and resolved spontaneously.
The major limitation of this study was the small sample size. However, the difference between the high and low NLR groups in terms of the post-CRT PFS and TMDD with durvalumab treatment remained statistically significant at this sample size, suggesting the significance of the NLR in relation to the efficacy of durvalumab treatment, which warrants further investigation in the future. In addition, the retrospective nature of this analysis may have underreported the toxicity profiles as well as their grading, particularly those that were non-severe, and these should therefore be interpreted with caution. In conclusion, durvalumab consolidation showed substantial efficacy in real-world locally advanced unresectable NSCLC patients who underwent concurrent CRT, and the level of NLR at the initiation of durvalumab was associated with the treatment efficacy.
Exploring medical students' perceptions and understanding of the health impacts of climate change: a qualitative content analysis
Background Climate change has been identified as the greatest threat to global health in the twenty-first century, with unfavorable health consequences among its impacts on humans. Exploring the perspectives and understanding of healthcare professionals and service providers concerning climate change therefore becomes imperative. The aim of this study is to investigate the perceptions and understanding of final-year medical students regarding the health impacts of climate change on individuals and the healthcare system using a qualitative content analysis. Methods This study employed a qualitative content analysis approach. Face-to-face interviews were conducted with the aid of an interview guide to explore the students' awareness, understanding, and attitudes towards the impacts of climate change on public health and the healthcare system. The collected interview data were subsequently organized into codes, categories, and subcategories based on the students' perspectives and attitudes towards climate change. Results Fifteen medical intern students were interviewed for this study, and the qualitative findings were categorized into 3 categories, 23 subcategories, and 229 codes. The findings revealed various health impacts of climate change, which were classified into three main categories: environmental effects with 8 subcategories, socio-economic effects with 8 subcategories, and health effects with 7 subcategories. These findings suggest that medical students understand that climate change has significant impacts on individuals' health and society, mainly through environmental degradation, increased risks, and climate-related disasters, which ultimately lead to adverse health outcomes. Conclusions The perspectives of medical students in this study indicate that climate change may not have a direct and immediate impact on the health of individuals and communities. However, it can significantly influence their health and socio-economic well-being by exacerbating or causing environmental problems and increasing the risk of weather-related events and natural disasters, ultimately leading to adverse health outcomes. While the medical students' perspectives on the health impacts of climate change are indeed broad, incorporating scientific knowledge about this topic into the medical curriculum and educating students on how to deal with patients affected by these consequences can have a significant impact on health management. This proactive approach, despite the students' already comprehensive understanding, can enhance their preparedness to address the health effects of climate change and contribute to strengthening the healthcare system's resilience in the face of climate-related challenges.
Background
The assessment of the negative and widespread impacts of climate change on human health, and the prediction that these consequences will continue in the future, is such that the World Health Organization has declared climate change the "biggest threat to global health in the twenty-first century" [1]. These consequences include heatwaves, cold snaps, malnutrition, and the exacerbation of cardiovascular and respiratory problems [2]. In addition, the incidence of vector-borne diseases such as dengue, Zika, Chikungunya, and malaria is being increased by climate change. The direct damage caused by severe climate change events and the increased risk of unsafe water and food in areas affected by climate change are significant health impacts. Marginalized communities, children, the elderly, and those with underlying health conditions are among the vulnerable populations that are disproportionately impacted by the consequences of climate change [3].
In this regard, it is crucial to prioritize monitoring and controlling climate-related health issues like heat-related illnesses, vector-borne diseases, and cardiovascular conditions. Preparedness for unexpected events, such as floods and storms linked to climate change, is vital. Healthcare facilities should also implement preventive measures and adapt to changing climate conditions [4]. Research in the United States by Maibach et al. highlights that physicians are trusted sources of information regarding climate change's health impacts [5]. Another study by Sarfaty et al. indicates that most physicians recognize the connection between climate change and health and advocate for interventions to mitigate its effects [6].
Future physicians have the potential to influence patients positively in adopting healthier behaviors to reduce climate change-related health risks [7]. Educating physicians systematically has been shown to increase patient awareness of climate change and reduce its health impacts [8]. Health literacy, defined by the European Health Literacy Consortium in 2012, empowers individuals to make informed decisions related to disease prevention and health promotion, ultimately improving their quality of life [9]. To enhance society's and the healthcare system's health literacy regarding climate change's health impacts, revising medical and paramedical school curricula and fostering better attitudes among students during their education are necessary [10].
While raising awareness about climate change's global health impacts and individual responsibility is crucial, many physicians may not fully comprehend their role in addressing it [10]. Climate change significantly affects human health, posing growing dangers. Climate change experts argue that physicians, due to their trusted status, can effectively educate the public about the health impact of climate change. Consequently, it is vital to integrate climate change and its health consequences into medical education, empowering future physicians to address this issue and mitigate its effects. Such an approach enables physicians to play a pivotal role in promoting public health and safeguarding vulnerable populations from the adverse impacts of climate change [10].
Despite some studies that have evaluated the understanding of physicians and the general public regarding the health effects of climate change in relation to patient care and the role of practicing physicians [11,12], there has been limited research [10,11] examining the perspectives of medical students concerning the health effects of climate change and the potential contributions of medical students in mitigating these effects.Recognizing this gap, the primary objective of this study is to elucidate the attitudes and perceptions of medical students, who represent the future stewards and workforce of the healthcare system, regarding the health implications of climate change within the framework and socio-cultural context of a developing nation.
Medical students, as future healthcare professionals, must acquire the necessary knowledge and skills to address the health impacts of climate change.Through education and training programs, they can become effective advocates for climate and planetary health, and play an important role in increasing public awareness about the health impacts of climate change.By doing so, they can contribute to the development of effective public health policies and interventions to mitigate the negative impacts of climate change on human health.
Participants and data collection method
For this study, face-to-face interviews were conducted with final-year medical students at Kurdistan University of Medical Sciences between May and October 2022. Kurdistan province is located in western Iran and has a population of nearly 1.5 million people [13, 14]. Kurdistan University of Medical Sciences is located in the city of Sanandaj, at the center of the province (Fig. 1).
The study used a purposive sampling method. The researcher (PP) collected data through 15 semi-structured and non-structured face-to-face interviews. The interviews began with three non-structured interviews to determine the interview axes, followed by 12 semi-structured interviews. Before the interviews commenced, an informed consent form was presented to the students, and the interviews began only after written consent had been obtained. For data collection, an interview guide with open-ended questions focused on the study objectives was used. During the interviews, the researcher listened carefully to the participants' responses and asked probing questions to keep the conversation in line with the research question, while trying to avoid leading questions that might influence the direction of the interview. In some cases where additional questions were needed, telephone calls were made and recorded, based on the participants' preferences. After the first interview was reviewed by one of the supervisors (AH), any issues with the interview management approach were identified and resolved in subsequent interviews. The main focus of the interviews was the experiences and attitudes of medical students towards the health challenges of climate change. The study employed semi-open questions to investigate medical students' understanding and perspectives on climate change and its impact on health, including the following: 1. What does climate change mean? 2. How can climate change affect the health of individuals and society? 3. How does climate change affect the health system? 4. How can these effects be reduced? 5. What are the solutions to increase the preparedness of the healthcare system for climate change?
Data analysis
Because understanding perspectives and attitudes is complex, researchers have favored qualitative content analysis as the method for extracting and categorizing students' viewpoints [15]. This method is particularly valuable when the aim is to describe a phenomenon within a conceptual framework and to organize the extracted data systematically, allowing a conceptual examination of the phenomenon of interest [14]. In this study, the Graneheim and Lundman approach [16] was used for data analysis. The text of each interview was transcribed word for word by PP immediately after the interview, and each recording was listened to and each transcript read several times by PP to immerse the researcher in the data and make sense of it as a whole. After the meaning units and phrases in the text had been identified, condensed meaning units were extracted through note-taking by AH. The participants' experiences were then identified as concepts under the supervision of the qualitative research supervisors (AH and AY) and in consultation with another researcher (YZ). In the next step, the initial codes were placed in subcategories based on their similarities and differences by AY and AH. Because one or more subcategories had been defined for each primary category in the preliminary model, some of the initial codes were placed in the existing subcategories (AH and AY). To ensure accurate comprehension of the concepts and to prevent superficial, automated coding, the coding and categorization procedures were carried out manually with paper and pencil (PP and AH). In the preliminary coding phase, the participants' own sentences were used, and the codes and condensed meaning units were identified. The codes were subsequently classified into categories and subcategories based on their similarities and differences (AY and AH).
To ensure the validity of this study and increase the trustworthiness of its findings, four criteria were addressed: credibility, transferability, dependability, and confirmability.
The researcher ensured credibility by being extensively involved in the study, from the design phase through data collection, analysis, and writing. In addition, the results of the data analysis were shared with the other qualitative researchers in this study to obtain their complementary and critical perspectives, ensuring coordination with the research team and enhancing dependability.
Transferability was ensured by selecting appropriate participants who had the greatest interest in the research topic. Data collection and analysis were carried out simultaneously, and coherence between the research questions and methods was maintained. Additionally, the study results were compared with those of other studies (in the discussion section) to enhance transferability. The researcher's interest in the phenomenon under study, long-term involvement with the phenomenon, documentation of interview transcripts, and efforts to obtain complementary and critical feedback from research participants, research group members, and other qualitative researchers ensured confirmability in this study.
Results
In this study, 32 final-year medical students in the internship phase were invited to participate, of whom 15 (47%) took part in the interviews. Among the participants, 13 were aged between 24 and 29 years and 2 were aged between 30 and 35 years, with a mean age of 28.8 years. The interviews lasted between 20 and 50 minutes, with an average of 35 minutes, depending on the richness of the participants' information.
From the interviews, 229 codes were extracted and categorized into three categories (environmental, socio-economic, and health effects) and 23 subcategories. The category of environmental effects had 8 subcategories: the effects of climate and global warming, environmental damage, effects on ecosystems, air pollution, water pollution, effects on the aquatic chain, effects on agriculture and the food chain, and weather-related hazards. The category of socio-economic effects had 8 subcategories: population migration, education, employment, security, urban services and resources, infrastructure, vulnerable groups, and economic issues. The category of health effects had 7 subcategories: health services, personal hygiene, malnutrition, mental disorders, communicable diseases, vector-borne diseases, and non-communicable diseases (including injuries). The extracted codes and their categories and subcategories are presented in Table 1. Some of the most important results related to these categories are discussed below.
Environmental effects
This category included 74 codes that were classified into 8 subcategories, as shown in Table 1. The codes in this group concerned environmental effects of climate change that have significant impacts on the Earth's climate and on various regions, causing global warming, extreme weather conditions in different areas, and destruction of the environment and ecosystems. These environmental effects lead to air and water pollution and negative impacts on water and food supply chains and agriculture, and can affect the health of individuals and society. Some of the most important findings related to these subcategories are presented in the following sections.
Global warming impacts
One of the most significant impacts of climate change is the alteration of climate patterns, which can lead to shifts in weather patterns and atmospheric changes. Climate change also results in a rise in Earth's temperature and global warming. This is caused by changes in temperature, solar radiation, increased absorption of sunlight, heat radiation, and ultraviolet radiation due to ozone depletion. These factors contribute to the formation of heat islands, an increase in hot days, and heat waves. Interviewees 7 and 4 commented on this matter, saying, "A rise in temperature exceeding what our bodies can tolerate can adversely affect our health. It can cause dehydration, disrupt our body's electrical system, and lead to other health issues." "When I worked in the surgery department, an unusual and heavy snowfall occurred in Sanandaj city. Naturally, such events lead to an increase in fractures and traumatic injuries."
Air pollution
Air pollution is a leading cause of death for millions of people worldwide each year [17, 18]. In some cases, droughts can result in the destruction of forests and wetlands, leading to desertification. As soil erosion intensifies over time, the frequency and severity of dust and sandstorms increase, causing unhealthy air for people to breathe. Interviewee 5 discussed the impact of air pollution and dust storms in Khuzestan province, Iran, saying, "The sandstorms from Iraq cause respiratory problems such as COPD and other respiratory issues for people. These problems also contribute to an increase in cancer rates among the people of Khuzestan, making the cancer rate in this province twice as high as in other regions of the country."
Water pollution
Climate change can result in increased heavy rainfall and floods, drought, and higher water temperatures, ultimately leading to changes in the quality of drinking water. These changes create new conditions for the growth of bacteria and viruses, leading to various human diseases when exposed to contaminated water. Additionally, water scarcity can also affect human health, particularly during drought conditions. Floods can also cause water pollution and limit access to drinkable water by infiltrating groundwater sources or contaminating freshwater purification systems. Interviewee 11 discussed the impact of droughts and floods on water scarcity and pollution, saying, "Droughts result in water shortages, which have implications for public health. When there is a lack of water, people are forced to quench their thirst with unsafe water, resulting in health problems. Furthermore, floods have health effects, as they can contaminate the city's water treatment systems and lead to water pollution. Clean water supplies are affected, and the water's cleanliness is compromised."
Population displacement and migration
The health impact of displacement and migration due to climate change is often exacerbated when combined with other factors, such as chronic poverty and marginalization. Climate change will lead to increased urbanization as a result of increased flooding and drought, and the destruction of agricultural land. In addition, the destruction of homes and shelters due to climate change-induced disasters can lead to forced displacement and migration of individuals, which can ultimately affect their mental and physical health. Interviewee number 8 stated regarding this issue, "Floods can cause damage and leave people under rubble, leading to casualties. The destruction of homes forces people to live in other places, such as refugee camps, where people are in closer contact with each other and have fewer sanitary facilities. They have to use public facilities, which can increase the transmission of infections among people."
Education
Climate change threatens children's rights to education at a global level [18]. Currently, nearly half of all children reside in countries that are highly susceptible to the effects of climate change, and the majority of these children are also exposed to vulnerable conditions. Climate events often result in damage to school infrastructure or even their destruction, which can ultimately cause children to permanently drop out of school. Also, due to forced migration, the opportunity to get an education is limited for some children. Consequently, climate change can ultimately lead to a decline in the overall literacy rate of society. Interviewee number 3 stated regarding this issue, "For example, in crisis-stricken areas, access to food is reduced, and people lose everything they have and are unable to meet their needs. Thus, they are forced to work to meet their own and their children's needs, and they may have to sacrifice their education. In addition, because they cannot meet their basic needs, they may not be able to meet many of their health needs. This can lead to a significant decrease in the health literacy of that region, and when the health literacy of that region decreases, diseases related to that region may increase."
Infrastructure
Disasters and extreme weather events can have destructive effects on infrastructure, resulting in various consequences. The destructive effects of climate change on infrastructure can cause significant human and financial losses. Interviewee number 11 stated, "Personal hygiene is affected when proper health services are not available, especially in this area, where children and women are mostly affected. We know that they have physiological needs regardless of their circumstances. For example, women have their menstrual cycle and need to maintain personal hygiene. When they cannot follow these practices, they are more likely to develop certain female-related problems or diseases. Similarly, children who do not have access to proper sanitation facilities, such as bathrooms or toilets, are constantly exposed to unclean environments outside their homes, which can lead to various diseases."
Health effects
This category, comprising 101 codes, is divided into seven subcategories: health services, personal hygiene, malnutrition, mental disorders, communicable diseases, vector-borne diseases, and non-communicable diseases (including injuries). The findings related to these subcategories are presented below.
Health services
Climate change can lead to changes in temperature, humidity, and seasonality patterns, which can alter the pattern of diseases and increase the disease burden, resulting in more visits to health centers and hospitals. Climate-related disruptions to supply chains, aid delivery, and patient transport can increase the workload and fatigue of healthcare workers, as can the rise in casualties and patients requiring medical attention. The occurrence of these disasters and climate events can also lead to an increase in deaths, seizures, accidents, trauma, fractures, and drownings. Participant 3 mentions that, "for example, an increase in diseases such as malaria, or diseases for which we have vaccines, may cause a community to lose access to vaccination, leading to an increase in disease burden and pressure on hospitals". Participant number 10 adds, "We witnessed the recent snowstorm and saw that most people who forced themselves to travel ended up in the hospital with issues such as falling, trauma, and hypothermia. In many cases, they were also faced with injuries from slipping or car accidents."
Personal hygiene
Personal hygiene refers to the daily habits individuals practice for their own health, such as bathing, brushing teeth, washing hands, washing clothes, and cleaning dishes. Climate change can lead to drought and water scarcity, making it difficult to access clean and hygienic water for daily needs. On the other hand, the loss of housing and pollution of water sources due to the environmental impacts of climate change can lead to a lack of clean water for personal hygiene or even drinking purposes. Participant 10 explains, "The loss of water resources can result in irregular bathing habits and compromised oral hygiene. Additionally, floods can disrupt water and sewage systems, leading to an increase in infectious diseases that were previously uncommon among the population."
Malnutrition
In this study, malnutrition refers to individuals' inability to access sufficient water and food to meet their daily needs due to the effects of climate change. Water insecurity, the prevalence of climate change-related diseases, and storms combine to create the conditions for unprecedented global nutrition crises. Malnutrition leads to a decrease in vitamins and minerals in the body, which can result in stunted growth in children, kwashiorkor, anemia, scurvy, and rickets.
Participant 12 explains, "The lack of access to food and the increase in food prices due to climate change is causing certain communities to have less access to food. This leads to a smaller dietary intake, malnutrition, and health problems that put their well-being at risk".
Mental disorders
Climate change and global warming can also lead to an increase in forced migration, which can lead to stress, depression, and anxiety disorders. The rise in climate change-related disasters can result in mental health issues such as post-traumatic stress disorder, adjustment disorder, and depression. Additionally, it can contribute to an increase in physical illnesses that are often accompanied by psychological distress. Heat waves and high temperatures can also cause mood disorders, anxiety disorders, cognitive decline, and social anxiety disorders. Participant 14 states, "The increasing heat and destruction of homes can lead to psychological damage, such as obsessive-compulsive disorders, anxiety disorders, and post-traumatic stress disorder. It can also increase the risk of suicide".
Communicable disease
The increase in temperature and the decrease in access to clean water, as well as the increase in the population of insects such as mosquitoes, ticks, and cockroaches, can lead to an increase in waterborne infections such as cholera, amoebiasis, and gastroenteritis. Participant number 3 says, "Due to the shortage of water or lack of access to purified water, some people are forced to use stagnant water. Stagnant water contains various microorganisms such as cholera and amoebiasis, which can cause severe diarrhea, vomiting, and other diseases related to those infections."
Vector-borne diseases
The term "vector-borne diseases" in this study refers to the diseases that are transmitted through the increase in the population of vectors due to changes in seasonal patterns, temperature, and humidity.Increased rainfall can increase the amount of stagnant water and create more breeding grounds for many vectors.Droughts can also create suitable conditions for vector breeding by forming pools of stagnant water.Participant number 9 says, "Water sources that were previously flowing like springs are now becoming stagnant, which increases the risk of parasitic diseases.The population of mosquitoes, ticks, and other vectors also increases, which in turn increases the risk of certain diseases."
Non-communicable diseases
The term "non-communicable diseases" refers to cardiovascular diseases that are exacerbated by the effects of climate change, including severe heat waves, dust storms, increased pollution, and decreased air quality.Some of these non-communicable diseases, including cardiovascular diseases such as heart attacks, heart failure, arrhythmias, and hypertension, as well as some cancers, respiratory diseases, mental disorders, trauma, and malnutrition, are likely to increase in frequency and severity due to the effects of climate change, especially in individuals with underlying conditions.Participant number 3 states: "Sandstorms can cause new respiratory diseases that the area has not experienced before and exacerbate asthma.Increased pollution can also jeopardize heart health by increasing the incidence of heart attacks, arrhythmias, and heart failure."
Discussion
Climate change impacts health and wellbeing [19]. These impacts are extensive, and some have emerged only recently. Investigating these challenges is crucial for public health adaptation [4], and assessing medical students' knowledge of them is valuable. A November 2021 survey found that 69% of physicians, 67% of clinical leaders, and 54% of executive managers understand climate change's health impacts [20]. This study, from the perspective of medical students, highlights their deep understanding and identifies unique health outcomes of climate change through content analysis (Fig. 2).
Environmental effect
The comprehensive discussions held by the study participants have enriched the evidence base by providing nuanced insights into how climate change affects health through its influence on the biophysical environment. This study offers a more comprehensive perspective than a similar study conducted by Sorgho and colleagues in Germany [21]. While Sorgho's study primarily focused on climate hazards and the direct and indirect health effects of climate change, our study treats climate hazards as one of several subcategories within environmental impacts. Similarly, Ebi and colleagues [22] concentrated on the health impacts of weather-related events and climate change. However, it is important to note that climate change affects health through its influence on the biophysical environment [23], and fundamental health necessities such as clean air, water, food, and shelter are affected [24].
The medical students systematically identified a diverse range of health-related consequences attributed to climate-related factors, from rising temperatures and droughts to sea-level rise, flooding, wildfires, air quality degradation, and water supply challenges. These multifaceted effects significantly impact human health, leading to increased mortality rates and concerns for psychological well-being [22]. Notably, extreme heat events (EHEs) are particularly lethal [25], with elevated temperatures contributing to heat-related illnesses and fatalities, accounting for over a third of such deaths in the last 30 years [26]. The IPCC predicts a concerning doubling of heat events every six years under a 1.5°C global warming scenario, a rate ten times faster than historical records [25]. Vulnerable populations are disproportionately affected [27].
Climate change affects water resources and public health through reduced access to clean water, increased infectious disease risks, and threats to coastal communities and industries due to rising sea levels. Intense precipitation raises the risk of flooding and casualties [28]. Medical students in this study emphasized the often overlooked health impacts of environmental degradation and climate-induced ecosystem changes. It is conceivable that their attention to this issue stems from the fact that high levels of environmental degradation correlate with lower life expectancies and higher infant mortality rates [29]. Prioritizing actions to mitigate climate change's health effects by addressing environmental impacts and safeguarding ecosystems is recommended. These findings underscore the urgency of addressing climate-related health risks comprehensively and emphasize the need for immediate action to protect the well-being of all communities.
Socio-economic effect
While medical students are well informed about the impact of economic and social factors on health, our study extensively explores the effects of climate change on health in these domains. These effects encompass a wide range of aspects, from physical and emotional well-being [30-32] to energy poverty due to climate change [32], along with connections to homelessness and housing crises. Our research underlines the comprehensive spectrum of climate change's economic and social impacts on health. Risks and damages associated with climate-related extreme events are influenced by various factors, including exposure, vulnerability, and preparedness [22]. Vulnerable populations are particularly susceptible to these socio-economic consequences due to disparities in resources and social conditions [31]. These findings enrich our understanding of how environmental changes affect both health and society, highlighting the necessity of holistic strategies to address these aspects of climate change for improved public health outcomes. Our study, based on the insights of medical students, has examined climate change's safety implications, including its impact on mental and social well-being, heightened psychological stress, and conflicts, which disproportionately affect marginalized populations [30]. We emphasize climate-induced housing instability and homelessness [30], alongside the emergence of environmental refugees due to migration [33]. Furthermore, our research highlights climate change's effect on urban services and resources, especially in disadvantaged communities [31], with economic disruptions affecting infrastructure and public health. Understanding these complex socio-economic dynamics is vital for comprehensive strategies to address climate change's broad impacts on health and society.
Health effect
Climate change is recognized as the foremost global health threat of the twenty-first century, adversely affecting essential elements including water, food, agriculture, and health through diverse mechanisms [28]. This study stands out for its comprehensive categorization of these health consequences, reflecting the substantial focus of the medical students on this subject. In contrast to a separate study by Renee N. Salas, which assessed physicians' and clinical managers' awareness and opinions of climate change's health impacts [20], our study provides insights into climate change's health effects from the perspective of medical students.
Physicians, clinical managers, and leaders acknowledge climate change's impact on healthcare [20]. This study highlights medical students' concerns about climate change's effects on healthcare and health services. Rising temperatures and more frequent extreme weather events due to climate change strain health services and patient care [20]. Climate change presents challenges, including added stress on healthcare systems during hot seasons [34]. Severe heat events may lead to issues such as operating room closures due to high temperatures and humidity, affecting healthcare professionals and patients and potentially disrupting healthcare services [22].
Medical students highlighted climate change's impact on healthcare professionals, citing increased workloads, staff shortages, fatigue, psychological stress, reduced patient focus, diagnostic errors, and lower care quality. They also noted shortages of medical equipment, hospital beds, and laboratory kits, rising medication demands, medication delivery disruptions, difficulties in patient transfer and assistance, limited access to healthcare facilities, and a heightened pandemic risk. Current concern centers on the severity and extent of future climate change impacts. Some healthcare professionals predict a growing trend in climate change's effects on healthcare services [20]. With worsening heat-related events predicted due to climate change over the next three decades, healthcare systems must prepare for increased heat-related diseases and reduce related healthcare challenges [25]. Unfortunately, health authorities have not adequately prepared for climate hazards, despite opportunities to enhance climate resilience in healthcare systems and facilities [22].
This study also focuses on climate change's impact on individual health. For instance, Sarah J. Coates and colleagues have noted deteriorating health conditions and disrupted access to clean water due to climate-induced mass displacement, especially in regions such as Africa [27]. Such disruptions can increase the incidence of communicable diseases [35], largely because the vulnerability of water resources to climate stress makes access to clean water more challenging and costly [36]. Inadequate access to clean water, sanitation, and hygiene contributes to approximately 2.2 million annual deaths from diarrhea [24].
Medical students in this study highlight climate change's role in the emergence and spread of vector-borne diseases such as malaria, dengue, and cholera due to alterations in temperature, humidity, and water availability [19, 27, 37]. Maintaining good personal hygiene to prevent water- and food-borne illnesses is also emphasized as a strategy to cope with climate change and its temperature fluctuations [19].
In this study, medical students highlighted health concerns related to climate change, including malnutrition, anemia, vitamin deficiencies, and the potential worsening of global hunger, weakened food systems, reduced nutrition, and threats to food security. Climate-related factors such as rising temperatures, erratic rainfall, and extreme weather events can reduce crop yields [38]. Increased atmospheric carbon dioxide levels may also reduce the nutritional quality of certain grains and legumes, raising the risk of iron and zinc deficiencies. Climate change can also indirectly affect nutrition by influencing the spread of infectious diseases [39].
Climate change can potentially lead to malnutrition (both undernutrition and obesity) and diet-related non-communicable diseases such as diabetes and cardiovascular disease, as it adversely affects agriculture and increases food and financial insecurity [39]. Furthermore, climate change significantly exacerbates the risk of famine and malnutrition by affecting agriculture and access to food and water [27]. This is especially critical in low-income countries heavily reliant on rain-fed agriculture, where it can cause significant food shortages and shift mortality from obesity to undernutrition [39, 40]. Even high-income countries in the Western Pacific region have been affected by the repercussions of climate change on diet-related non-communicable diseases (NCDs) [40].
This study of medical students' views highlights the impact of climate change on non-communicable diseases. Higher temperatures, as noted, increase the risk of mortality from cardiovascular and heat-related diseases, exemplified by the 2003 European heatwave [28]. Additionally, climate-induced air pollution, linked to various non-communicable diseases including respiratory ailments, lung cancer, and cardiovascular events, raises concerns [41]. Medical students also expressed worries about climate change potentially increasing disabilities and congenital abnormalities and compromising the immune system [28, 41].
Climate change is linked to psychological disorders such as suicide, PTSD, depression, and anxiety disorders, consistent with Susan Clayton's findings [42]. Socio-economic impacts and severe weather events are the primary drivers. This study also notes a decline in hope for life and an increase in obsessive-compulsive and schizophrenic disorders as additional psychological consequences of climate change [42].
While we recognize the valuable insights gained from this study, it is essential to address its limitations. The qualitative nature of our research and the small sample size raise valid concerns about the generalizability of our findings, and readers should exercise caution when extrapolating the results of this qualitative study, which was conducted at a single medical school with a small number of participants. Nevertheless, these limitations do not detract from the insights gained from this specific group, which confirm their strong awareness of the direct and indirect impacts of climate change on health. The findings underscore the need for continued education and awareness among future healthcare professionals to address the evolving challenges posed by climate change.
Conclusions
This qualitative study based on medical students' views categorizes climate change consequences into environmental, socio-economic, and health impacts, highlighting their interconnectedness. Many of these consequences can exacerbate health outcomes. Addressing these health implications necessitates comprehensive interventions at all levels. Given the qualitative nature of this study, further research is recommended to assess the significance of newly identified health impacts related to climate change.
Fig. 2 Direct and indirect effects of climate change on health. The figure is based on a concept from the CDC and has been customized to suit the content of this study. Adapted from a concept inspired by a CDC figure: https://www.cdc.gov/climateandhealth/effects/default.htm
Table 1 Categories, subcategories, and codes for the health effects of climate change

Environmental effects
Global Warming Impacts: Atmospheric changes, Seasonal weather changes, Climate imbalance, Cold waves, Increase in greenhouse gases, Ozone layer depletion, Temperature inversion, Changes in air temperature, Changes in solar radiation, More absorption of sunlight, Increase in solar radiation, Increase in UV rays, Formation of heat islands, Increase in the number of hot days, Heat waves
Environmental Degradation: Submergence of islands, Dust storms, Flooding of rivers, Destruction of forests, Changes in the earth's orbit, Destruction of natural resources, Soil erosion, Increase in salt marshes
Ecosystem: Effect on living organisms, Reduction of vegetation, Extinction of animal species, Extinction of plant species, Increase of insects, Increase of rodents, Extinction of marine creatures, Destruction of the marine ecosystem, Change in native ecosystems
Water Pollution: Declining water quality, Reduced access to safe water, Reduction in potable water, Increase in the need for safe water, Pollution of drinking water, Decreased environmental health of camps
Water Supply Chain: Reduction of the underground water level, Increase in the water evaporation rate, Reduction of dam reservoirs, Reduction in annual precipitation, Decrease in drinking water, Increase in non-precipitating clouds, Decrease in water absorption, Inconsistency in seasonal precipitation, Melting of glaciers, Rising sea level, Increase in stagnant water, Unusual rains, Excessive use of underground water
Agriculture and Food Supply: Disturbance in the food chain, Loss of livestock, Destruction of farming, Increase in agricultural pests, Food insecurity, Diminished quality of agricultural products, Diminished quality of livestock products, Decrease in agricultural products, Reduction of livestock products, Inequality in access to food sources, Loss of flowering fruit trees, Salinization of land, Food contamination
Climatic Hazards: Drought, Unusual floods, Dust storms, Tsunami, Landslide

Socio-economic effects
Migration and Displacement: Urbanization, Displacement, Forced migration, Living in camps, Lack of shelter
Education: Unmet basic needs, Educational failure, Withdrawal from education, Decreasing health literacy
Occupation: Increase in staying at home, Increase in remote work, Destruction of businesses, Increase in unemployment, Decrease in work efficiency, Decrease in working hours, Increase in days off
Security: Decrease in social security, Decrease in mental security, Entry of wild animals into the city, War, Increasing interpersonal tension
Urban Services and Resources: Injustice in service provision, Inequality in access to resources, Disruption of urban planning, Increased need for medical services
Infrastructure: Destruction of houses, Fire, Destruction of health centers, Road closures, Destruction of bridges, Destruction of coastal cities, Destruction of treatment plants, Destruction of atomic power plants, Destruction of water treatment plants, Destruction of mines, Submergence of cities, Destruction of communication infrastructure
Vulnerable Groups: Effect on vulnerable groups, Effect on large families, Effect on poor people, Damage to poor countries, Increase in addiction
Economic Issues: Decrease in income, Increase in medical costs, Increase in personnel costs, Increase in reconstruction costs, Economic poverty, Decrease in purchasing power, Expensive food, Expensive livestock products, Expensive medicines, Financial losses, Lack of budget, Lack of adaptation of poor people

Health effects
Health Services: Disruption in aid delivery, Disruption in patient transfer, Diminished treatment quality, Drug shortages, Increase in workload, Decrease in access to health centers, Reduced focus on patients, Increased fatigue of treatment staff, Increased mental pressure on employees, Increasing number of patients, Pandemics, Increasing diagnostic errors, Increasing need for drugs, Lack of medical equipment, Lack of hospital beds, Lack of laboratory kits, Lack of health and medical personnel
Personal Hygiene: Reduction of bathing, Lack of proper sanitary facilities, Reduction of oral and dental hygiene, Decreased hygiene of menstruating women, Reduction of personal hygiene
Malnutrition: Malnutrition, Increase in global hunger, Weakening of diet, Decrease in nutrition level, Threat to food security, Reduction of minerals, Lack of vitamins, Reduced growth of children, Exophthalmia, Anemia, Scurvy, Rickets
Mental Disorders: Anxiety disorders, Stress, Depression, Schizophrenia, PTSD, Suicide, Reduced life expectancy, Reduced interpersonal communication
Communicable Diseases: Increase in diseases shared with livestock, Malt fever, Swine flu, Bird flu, Rabies, Hydatid cyst, Colds, Cholera, Gastroenteritis, Tuberculosis, Hepatitis, Tetanus, Amoebiasis, Increase in fungal diseases, Outbreak of new diseases, Prevalence of diseases subject to eradication
Vector-Borne Diseases: Salek, Crimean-Congo fever, Malaria, Lyme disease, Leishmaniasis, Plague, Rabbit fever
Non-Communicable Diseases: Cardiac diseases (increased heart attack, heart failure, arrhythmia, blood pressure); Non-infectious skin diseases (skin cancers, sunburns, increased lentigo, increased skin sensitivity); Lung diseases (asthma, COPD, lung cancer, bronchiolitis); Neonatal diseases and abnormalities (increase in newborn malformations, congenital disorders, mental disabilities, physical disabilities, decrease in IQ, increase in premature births, increase in MMR, decrease in the body's immune system); Injuries and trauma (frostbite, heatstroke, increased electrolyte disorders, scorpion stings, snake bites, increased mortality, dehydration, increased trauma, increase in seizures, increase in animal bites, increase in burns, increase in fractures, drowning, increase in accidents)
"year": 2023,
"sha1": "11a0343a7ee8ee2f7b4ac4cbf81242d73ba475a9",
"oa_license": "CCBY",
"oa_url": "https://bmcmededuc.biomedcentral.com/counter/pdf/10.1186/s12909-023-04769-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fe1079b8c1e6949e49a34e697d70d95372a8bd30",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": []
} |
This study aimed at understanding the meaning of comfort to the families of people in intensive care units. It consists of a qualitative study carried out in the intensive care unit of a hospital in Salvador-Bahia. Fourteen family members were interviewed. The authors utilized the theoretical principles of symbolic interactionism and the technique of qualitative data analysis. Results indicated that the categories Safety, Receptiveness, Information, Proximity, Social and Spiritual Support, Convenience and Integration expressed the meaning of comfort, which was comprised of reliability in terms of technical-scientific competence and a supportive and sensitive attitude of the team, chance of recovery, access to information and the opportunity to be close to the patient, support of people in their social life, spiritual sources and the environmental structure of the hospital, preservation of self-care and routine activities. It was concluded that the family is important as objects and subjects of the actions in healthcare and must be the focus in public health policies and programs in Brazil. DESCRIPTORS: Family. Health care. Family nursing. CONFORT EN LA PERSPECTIVA DE FAMILIARES DE PERSONAS EN UNIDAD DE TERAPIA INTENSIVA RESUMEN: Se objetivó con el estudio comprender el significado de confort para familiares de personas en Unidad de Terapia Intensiva. Esta investigación cualitativa fue realizada en la Unidad de Terapia Intensiva general de un gran hospital, en Salvador-Bahia, Brasil. Catorce familiares fueron sometidos a la encuesta. Se utilizó la teoría del Interaccionismo Simbólico y se empleó la técnica de análisis de datos cualitativos. Las categorías Seguridad, Acogida, Información, Apoyo social y espiritual, Proximidad, Comodidad e Integración consigo y con el cotidiano expresaron los significados de confort que involucró la confianza de la competencia técnico científica y actitud solidaria y sensible del equipo, oportunidad de recuperación, acceso a la información y posibilidad de acercarse al pariente, apoyo de personas del entorno social, de fuentes espirituales y de la estructura ambiental del hospital y preservación del cuidado personal y actividades habituales. Se concluye que la familia es importante como objeto y sujeto de acciones en salud y debe estar enfocada en las políticas y programas de salud pública en Brasil. DESCRIPTORES: Familia. Atención a la salud. Enfermería de la familia.
INTRODUCTION
Comfort promotion is related to the proposal of integral care, which indicates the need to ease the suffering of people in all contexts of health care, including the hospital setting.4 However, the biomedical model, dominant in the practice of health services, demands the rendering of services on a large scale and at high cost, which leads to a fragmented organization of the work process. This model has reached the family, since it commonly disregards the needs of patients and their family members, which shows that professional actions remain centered on the ill body. Comfort promotion requires that caregivers value the autonomy, beliefs and expectations of the family member related to the care of his or her relative. The family must be considered a subject in the relationships established between professionals and patients and, therefore, demands attentive listening and consideration of its subjectivity, which is the first step in respecting its autonomy.5 The literature points out that family members who have a hospitalized relative may find themselves biologically, psychologically and socially fragile, since illness and hospitalization constitute events that produce discomfort, including suffering and changes in roles and habits of daily life, as well as uncertainty about the recovery of the relative.3-4 Nevertheless, there is a lack of studies approaching the theme of comfort from the perspective of families in the intensive care unit (ICU) with a multidimensional perspective. The literature addresses the meaning of the comfort construct for people in the ICU; however, it is limited in terms of focusing on the families.3 Considering the lack of studies approaching this theme and the importance of comfort promotion as an objective of nursing care practices, the family must be seen as a subject in health practices, embracing an integral view. With the conviction that the positive responses of the hospitalized individual to their treatment may result from the affectivity established in this social network, it becomes fundamental to answer the central question of this study: what is the meaning of comfort for families who are experiencing the hospitalization of a relative in an ICU? This understanding will allow us to reflect on the care provided to the patient's family, as well as guide teaching, research and practices in nursing aimed at promoting comfort.
In this context, the purpose of the study was to understand the meaning of comfort for the families of people in a critical health condition hospitalized in an ICU.
METHOD
Considering that comfort and discomfort are products of interaction, understanding must be sought regarding the interactions of the person with himself and with those surrounding him during the treatment.1 Therefore, the perspective of symbolic interactionism was used in order to understand the meaning of comfort as a subjective state, in light of the interactions experienced by the family when facing the disease and treatment of their relative. The basic premises of this approach1,6 allow the understanding that the experience of comfort and/or discomfort is associated with the interactions established by the person at certain moments, and explain the experience as a process and result of these interactions. This approach also makes it evident that the meanings of comfort are presented and modified in the interactions established by the person in the situations experienced (with himself, other beings, situations and objects), and that people's actions related to comfort or discomfort are based on the meanings they attribute to the people and objects they interact with.1 The study is exploratory with a qualitative component, developed at the general adult ICU of a large public teaching hospital located in the city of Salvador, Bahia, which only treats patients of the Brazilian Unified Health System.
The participants were the families of adults hospitalized in the ICU who met the following inclusion criteria: 18 years of age or older; being the closest relative to the hospitalized individual (family bonds); having a relative in the ICU for more than 24 hours; and being able to report and verbalize the experience.
The first approach to the family members was made before or after a visit to the hospitalized relative, in the waiting room of the ICU. After the study objectives were clarified, the authors invited the family member to participate in the study and verified their interest, after which a time was scheduled for the interview. The interviews took place between August and October 2010, in a private room of the ICU, after the participants had agreed to and signed the free and informed consent form.
The interviews were recorded and later fully transcribed. Their duration varied between 25 and 40 minutes. They were performed until data saturation was reached; in other words, until no new data emerged, in parallel with the growing comprehension of the identified concepts (categories). The criteria used to determine saturation consisted of a combination of the empirical limits of the data, the integration and density of the categories, and the theoretical sensitivity of the analyst.7 Fourteen family members participated in the study.
The data related to the characterization of the study participants were analyzed as percentages and presented descriptively. The technique of qualitative data analysis was used to analyze the answers to the open questions.7 Thus, in the first stage of data analysis, the answers obtained in the interviews were examined exhaustively, line by line, in order to extract the first codes. Through a process of comparison, these codes were grouped by similarities and differences to form the categories. The abstraction reached in this stage of data analysis allowed the categories to receive their respective names, which represented the meanings of the codes they grouped. This was a careful analytical process, through which raw data were split, examined and compared internally, while concepts were identified, developed and named. Therefore, as the analysis advanced, the categories were built, recoded and compared internally.7 In order to maintain the anonymity of the family members, they were identified in the statements according to their order of participation in the study.
Sociodemographic characteristics of the family members
The fourteen family members interviewed were predominantly female (64.3%), with a mean age of 38.2±11.9 years; most had completed high school (64.3%), were married (64.3%) and were Catholic (57.1%). As for their employment situation, 57.1% were employed and 14.3% unemployed. The monthly family income of 57.1% of the participants was lower than three minimum wages; for 35.7% it ranged from four to five minimum wages, and 7.1% earned six to ten minimum wages. The prevalent family bonds to the hospitalized relative were: sibling (35.7%), spouse (14.3%), mother (14.3%), child (14.3%), other (uncle, nephew) (14.3%) and father (7.1%). Most family members had no prior experience of having a relative hospitalized in an ICU (71.4%).
Meaning of comfort from the perspective of family members of people in an ICU
The understanding of the meaning of comfort emerged from the analysis of the statements of the family members, which revealed the interactions established between the family members and the relative in a life-threatening situation in the ICU, between the family and the hospital care team, and with the material elements of the hospital structure. Data analysis permitted the identification of seven categories that, as a group, expressed the meaning of comfort for the family members, namely: Safety, Receptiveness, Information, Social and Spiritual Support, Proximity, Convenience and Integration with themselves and their routine.
Safety
This category showed that comfort meant the faith of the family members in the technical-scientific competence of the health team, as well as the possibility of recovery of their loved one as a consequence of being in "good hands" and in a place that offers resources for the necessary treatment, conditions that guarantee the survival of the relative. For the family members, comfort meant the perception that the relative was being well-treated by skillful and expert professionals, that the team acted immediately in order to meet the care and treatment needs of the relative, and observed him with attention and responsibility. Therefore, the interactions of the family members with technically and scientifically competent professionals, who are committed to the quality of the care they offer, promote comfort.
Comfort is seeing that the health team is taking care of my son, do you understand? This makes me feel comforted, calmer. On the day it happened, I arrived here at the hospital, and I was comforted because the service was fast (I 1).
They shower him, take care of his oral hygiene, put on lotion, they take care of him… he is clean. They are always watching him, giving his medicine at the right time, always thoughtful, right? (I 5).
Comfort is knowing that he is being well treated and it is possible to see that because he is clean, they change his clothes, it makes us feel more relieved. It is certainly not a complete relief, but we know he is being observed and monitored. This is the comfort (I 9).
Everything she needs the ICU provides. Intensive care, right? So I am really calm due to that, it brings comfort to me and to her [...] so I think this place gives comfort (I 12).
The most comfortable moment is when the doctor says he is reacting, getting better, and that he will get out of this situation (I 4).
Another meaning of comfort was attributed to the family's interactions with the humane attitude of the hospital care team, as expressed in the following category.
Receptiveness
For the family members, comfort meant being well received; that is, being considered a person of importance in the interpersonal relationships with the members of the hospital care team. In their perception, it meant respect, acceptance and appreciation by the professionals, being heard and understood, and perceiving that the people in this system cared about the suffering of the family and tried to minimize it. Comfort also meant kind service by the people at the hospital reception and in the ICU, being treated with calmness, greeted with a smile or approached with a conversation, receiving information in a kind and understandable way, and perceiving good will and sincerity on the part of the professionals. The team's attitude of showing interest in their needs, such as checking whether they were ready for the visit, escorting them to the relative's bed, explaining delays in the visit and making up for the lost time, provided comfort to the family members.
When you arrive there, they treat you really well; they give us a lot of attention. I just got here to reception, they are treating me well... They talk to us, they are really calm, thoughtful. It works by order of arrival, they call you, ask who the patient is, and take you to their bed, because you often do not know it (I 5).
The doctor who talked to me, explained it to me, she always gives me a word of comfort, even with the situation what it is, she does not hide it, she talks to me... this cheers you up, even knowing it is a difficult situation (I 6).
Last week he was having this sort of spit in his mouth. There were two girls, so I called one of them and she said: 'oh, that is normal, you just have to wipe it'. But the other girl got closer, cleaned him, told me it was normal and explained what was happening. And the other did not care, she just said: 'this happens. He just needs to be cleaned!' That upset me, I did not say anything, but it upset me. The other came to me, to him, and did everything. She was great! (I 2).
When we arrive, we are well received at the reception. The doctor updates us, explains it properly. The nurses are thoughtful of us too. Whenever I ask something, they answer it kindly, with no difficulties; I have never been treated badly or with ignorance. They always tell you good morning or good afternoon, try to get information on how my father is. When we are putting clothes on they always ask: 'are you ready?'. They are always kind and willing (I 6).
The meaning of comfort to the family was also associated with their ability to get needed information regarding their relative, as the category Information revealed.
Information
Comfort meant receiving information that allowed the family members to be aware of the health reality of their relative. It meant receiving clear, true and sincere information from the health professionals, which divulged the real health condition of the relative and the treatment he was receiving, as well as everything related to his clinical evolution and what would happen in terms of exams, transfers and eventual discharge from the ICU. It also meant having access to information either in person or by telephone, at least on a daily basis, both in the institutionally determined moments, for instance in the medical report, and when the family judged that further information was necessary.
In my opinion, a good explanation brings comfort (I 1).
Comfort is when the doctor gives information to me, on how he is, on how he reacted, on whether he had a setback (I 4).
People are always giving information. Just now, the doctor talked to me, she explained it and she always has a word to comfort me, even in the face of a difficult situation, she does not hide things, she tells me… that cheers me up, even though knowing it is hard (I 6).
Every day there is a medical report, a conversation about the patient; they tell me everything about him, what is happening to him, what his day was like, everything they are doing to him. We know it. She explains everything in the medical report, whether the kidneys have stopped working… or started working again (I 7).
The attention of the doctor after the visit, explaining to us what is happening, comforts us, even if this is a difficult situation! (I 9).
The support provided to the family by the hospital team was also associated with the meaning of comfort, as shown by the category Social and Spiritual Support.
Social and Spiritual Support
In the family's perspective, comfort meant receiving support from the family, friends and their religion. Social and spiritual support was related to the provision of support to the family members, aimed at providing conditions for them to express their feelings and emotions, thus enabling them to face the experience in a more positive way. The frequent presence of friends in their lives meant that they could obtain assistance for the resolution of common problems. They encouraged the family member to have more faith and hope, which were transmitted by words of consolation and prayers.
I know now that I have friends. They are there, by my side, giving me strength. It is times like these that we need a friend and, honestly, I never thought I had so many friends, people I did not even know well... are calling, asking for information, staying there with us, giving support (I 2).
The family acknowledged that the spiritual support and faith in a superior being provided comfort, because by having their beliefs strengthened, the hope in the recovery of their relative remained strong.
The suffering we see her going through... completely cut, every day that goes by they operate and they open her again and so it goes... sometimes we just feel like losing faith, but we cannot... there is so much faith we cannot lose it, she is coming back home... I have faith in God that she is coming back, because the way she is now is only by the hands of God (I 6).
The opportunity to remain close in proximity to the relative in the ICU was another source of comfort revealed by the family and expressed in this category.
Proximity
Comfort meant being together with the relative physically and emotionally, and enjoying the interaction established between them. Proximity also meant having the opportunity to verify, monitor and observe the relative's condition closely, identifying what his/her needs are. It meant spending the day with the relative in the ICU or even in the waiting room, without a restriction on the number and times of visits, being with the relative at the determined time and having the visit started at the scheduled time. If comfort meant being together, this presence was strengthened when the relative hospitalized in the ICU was capable of interacting, in other words, listening and expressing himself, and when his family members identified that he could perceive their presence by his side.
When I get here, I see my son sleeping and he cannot even look at me, it is a difficult situation... when he opens his eyes when I arrive it is a comfort (I 1). I would like to come every half hour and see her, just keep looking at her, because then you feel calmer (I 3).
Not being present the whole time is discomforting. If a companion could stay it would be more comfortable for us, for the entire family, knowing that there is someone with him, so that he can feel we are there for him. Because I believe he can hear me, when I talk to him I think he can hear, and if he could see his son there with him it would be a great comfort for the whole family (I 6).
If I could, I would stay here, to really see what is happening. If I could, I would stay the whole time, because I can only trust what I see. I do not trust just by hearing the doctor say: 'Oh, she is fine!' I really have to see it with my own eyes (I 10).
Elements of the environment and the physical structure of the hospital institution were also objects of family interaction and were associated with the meaning of comfort, as revealed by the category Convenience.
Convenience
Convenience meant the comfort of access to a place to have a meal and get a drink of water, having appropriate accommodations to sit and even spend the night, having a restroom and a waiting room near the ICU and a television to watch at the hospital.
The area should be more comfortable, more pleasing to smell, and there should be a nicer chair. There should be a view that the person would feel glad to look at, because the person is there and soon he will have the surprise of seeing the patient who is there (I 13).
I arrived at the hospital in the morning, and there was no access to get in... I could not even get a badge to stay in the hospital... I had to stay outside, in front of the hospital on the bench, so I took a nap... (I 4).
The meaning of comfort for the family members was also related to the integration to themselves and the routine, as described in the following category.
Integration to themselves and to the routine
Comfort meant the ability of the family member to take care of not only himself, maintaining the usual activities and family life, but also the relative in the ICU. Unfortunately, this comfort was rarely experienced by the family members because their attention was centered on the hospitalized relative, on the possibility of his/her loss and on the demands of hospitalization. Having a relative in the ICU meant difficulty in maintaining the integration with themselves and their routine due to the compromised nature of their sleep, rest and nutrition and the continuity of the family life, activities and projects, as before the admission of the relative into the ICU.
When I get home and do not see my son (he is really close to me), I cannot sleep, I keep worrying, I cannot eat. I have already lost 5 kg. After he went there I saw my weight and I almost could not believe it, it is really scary (I 1).
My life is not how it used to be. I wait for him to get home from work, take a shower, have dinner, go to school, and get back from school. I have to tell myself not to be nervous... I have not been to the gym; there is no logic in going to the gym with my son in this situation... I cannot eat well, just a little, and I sleep only a little (I 2).
I am not home, I am not with my daughter who is only eight years old. Sometimes I miss her... my children are grown up but they need a mother (I 10).
My life is a catastrophe. Before, it was all about studying; I was studying to get into college, and now, it is not like I am giving up, but now my life has totally changed [...] if she was sick, she is being treated but in her situation, we do not know what might happen (I 11).
DISCUSSION
The family members were predominantly women, young adults, married, followers of a religion, with completed high school education, a low monthly income and a job. They were predominantly siblings, spouses, mothers and children of the hospitalized relatives. 9 The study results expressed the meanings of comfort for family members who have a relative in the ICU. The identified categories characterized the multidimensionality of the phenomenon, as already verified by other authors, 1,10-12 and expressed that the promotion of comfort requires sensitivity, rationality and material conditions in terms of the care provided to the family and their relative. The identified meanings of comfort also reinforced the idea that this phenomenon is a positive, subjective and dynamic experience that changes in time and space and results from the interactions established by the individual with himself, the environment, those surrounding him and the situations he faces. 1

The Safety category evidenced that the family, by reacting to the risk to their relative's life, trusts in medical-scientific rationality in the hope that this will guarantee survival. In this context, comfort results from trust in the technical quality of the service, which makes the ICU emerge not as a place destined for the end of life, but as the scene of possible recovery of the being's strength. 1 The demonstration of technical efficacy is essential for the family's trust in the health team and, thus, for the promotion of comfort. Safety, which is considered to be a fundamental attribute of comfort promotion, derives from the relationship established with the ICU professionals during the hospitalization, going beyond the technological apparatus and the qualified team of the ICU to mean an essential source of safe care and "cure". Comfort derives from the conviction that the relative is in a place that offers the best conditions for recovery.
The Receptiveness category showed that comfort means interaction with professionals who are sensitive to the family's needs. The attention provided by them allows the family to feel accepted and acknowledged as a participant in the care process of the relative. Receptiveness also meant the possibility of meeting an expectation or need expressed by the family in face of the disease and the hospitalization of the relative. This professional attitude is expressed by words and kind behavior, by the offer of help and information, and by the manifestation of concern for the well-being of the family. 10,13 Among its numerous meanings, receptiveness in healthcare means "[...] attention, consideration, shelter, receiving, serving, giving credit to, listening to, admitting, accepting, taking into consideration, offering shelter, protection of physical comfort, having or receiving someone", 14:291 attributes of integral healthcare. Comfort, in terms of receptiveness, means trying to understand what the family members say and need. In order to do this, it is fundamental to have an authentic interest in listening to those whom we intend our good practices to help. Attentive listening constitutes an essential element for the promotion of comfort, as expressed by this category. The relational distance between the ICU professionals and the family may mean discomfort due to the lack of comprehension regarding the situation faced and the disregard of the importance of their participation in the care of their relative. Verifying the interest of the professionals in answering the family's questions favors this approach and, consequently, the comfort extended to the family. This is an important action in nursing. 15

Comfort for the family also meant being informed regarding the evolution of their relative's clinical condition. Information that was offered clearly and sincerely, allowing the family to understand the true condition of the relative, was considered a comfort, helping to minimize questions and, sometimes, the fear of uncertainty regarding the relative's destiny or the lack of control over the situation. This need is an important one, as families who are "surprised" by a complication in their relative's care may suffer undue distress. Nurses and other health professionals must anticipate the information required by the family members. 18

Social and spiritual support constituted another meaning of comfort, showing that in situations of crisis significant people play important roles, taking care of children, guiding parents, helping in the performance of daily activities and helping to resolve everyday problems. Therefore, family care becomes strengthened by the network of social support formed by relatives, friends and neighbors. 19 In this perspective, the nurse can recognize that everyone involved requires support (not just the patient) and can use reason in the application of hospital routines and rules, aimed at ensuring that relatives can stay with their loved ones in the hospital. The search for spiritual support was also noted, and presented comfort to families. Several family members engaged in individual prayers, group prayers and promises to God.
Comfort, according to the family members, meant being able to be close to their relative as much as possible. Here, the authors reassert the idea of the importance of individualizing the hospital routines and rules in order to favor the visit of a relative whenever it is requested or perceived as necessary. Hospitalization in the ICU, besides generating a rupture in the family life, aggravates this situation when it restricts the proximity of the family member to the relative. The suffering regarding this separation cannot be eliminated, but it may be relieved by the constant ability of the family to establish a relationship of exchange and involvement with the relative. 16

In the family members' views, comfort also meant Convenience; in other words, having quality surroundings and amenities within the hospital environment. Other studies pointed out that the waiting room was the place where families stayed the longest, since they wanted to be close to the relative and to feel useful. Therefore, it is necessary that this space be pleasant and provide appropriate accommodations. For this reason, it must be spacious, clean and private, with comfortable reclining chairs, especially for those who wish to spend the night. It must offer access to the Internet, restrooms, snacks and good communication with the ICU. 10,18,21 It is possible to increase the comfort of family members by providing pleasant accommodations.
Comfort meant being able to take care of themselves and maintain their regular lives, in spite of having a relative in the ICU. Nevertheless, the hospitalization of a relative generates changes in the dynamics of the family and personal life due to the intense anxiety and insecurity resulting from the disease and the absence of the relative in the routine of the family relationships, leaving a gap to be filled. 21 This complicates life for the family members in meeting demands for personal care, as highlighted by the participants in this study, who described the following: lack of appetite, difficulty in resting due to the absence from home, and tension and anxiety in face of the uncertainty regarding the relative's recovery. Studies have shown that lack of self-care is observed in family members, who suffer mainly from the loss of sleep, nutrition and sexuality. They also revealed that the family members who were most frequently absent from home worry about their house, are unable to relax far from their children, underestimate their own health problems, do not verbalize their anxiety and fears well and do not participate in leisure activities. 19,21 It is inferred that the meaning of comfort related to the integration of themselves into the routine is difficult to translate into the experiences of family members with a relative in the ICU, but it is understood that promotion of social and spiritual support, receptiveness, proximity of the family to the relative, and security in the provision of care and information may, indirectly, contribute towards this integration.
The study results provide clear direction for the nursing team in terms of practices to promote the comfort of family members. Therefore, nursing professionals need to consider family members as also being under their care, and be attentive to the behavior, gestures, attitudes and forms of communication between themselves and the family members, since the interactions established may generate either comfort or discomfort. The approach and involvement offered to the family must be valued and may be facilitated by an empathetic and sensitive attitude towards the family. There resides a challenge to be achieved by the interdisciplinarity of health practices, by the adoption of a family focus in the scope of policies and programs in public health in Brazil, and by the incorporation of the family as both an object and subject of the health actions. Undoubtedly, the family must constitute a target for public policies so that human resources are qualified and made available at health institutions, aimed at minimizing the difficulties of integrating the family into the care unit. The education of the health professionals, however, remains largely based on the biomedical model, distant from the relational and intersubjective universe of the family.
The revealed meanings of comfort showed that, for the promotion of comfort of family members to take place, the nurse(s) must act in an integrated way with the health team, proposing, carrying out and evaluating health care practices related to receptiveness, social and spiritual support and communication with the family. It is also important to point out to hospital administrators the elements of the hospital physical environment associated with the family's convenience and comfort (and those that are not). It is essential for the health team to receive technical-scientific education regarding new-generation technologies oriented to the rendering of safe care. Based on these recommendations, the health team may perform, in a consistent and grounded way, family care practices, thus obtaining success in comfort promotion.
CONCLUSION
The study aimed at identifying the meanings of comfort for family members with a relative in the ICU, which were expressed in seven categories, namely Safety, Receptiveness, Information, Proximity, Social and Spiritual Support, Convenience, and Integration to themselves and to the routine.
These categories showed that comfort meant confidence and faith in the technical-scientific competence of the health professionals; perceiving solidarity and a sensitive attitude of the people in the service system; having access to and being aware of the condition of the hospitalized relative; being close to the relative physically and emotionally; receiving help from people in their social life; having sources of spiritual support; enjoying the support of the physical and environmental structure of the hospital and being able to take care of themselves and maintain the dynamics of their daily lives.
The incorporation of the family as both object and subject of health actions, as well as the focus of the full scope of public health policies or programs in Brazil was considered important.
It is important to highlight as a possible limitation of this study its development in a restricted context; in other words, a single public ICU of a hospital located in the municipality of Salvador, Bahia. The lack of literature on the family and comfort theme is evident, which hints at the need for the replication of this study with other population groups, at other public and private ICUs in Salvador and in other states of the country, so that it is possible to broaden the concept of comfort in the perspective of family members who experience the hospitalization of a relative in an ICU. The lack of studies of this nature may have limited the comparison of the findings of this study. The authors also propose the construction of comfort indicators, aimed at measuring this construct in clinical practice to evaluate the effectiveness of the care provided to family members with a relative in a life-threatening situation.
The study began after it was approved by the Committee of Ethics for the Analysis of Study Projects, in July 2009, under protocol CEP nº 022/2009. The aspects contained in Resolution no. 196/96 of the National Health Council were respected in order to guarantee the protection of the study participants. | 2019-01-09T01:32:33.811Z | 2012-12-01T00:00:00.000 | {
"year": 2012,
"sha1": "40088c07df688381d4b879a85943726384d6d2e4",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/tce/a/GJ5HVvYvs4rbqJMyyDd7B8N/?format=pdf&lang=pt",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "f44f3549872620998502c40ce47d0f958479971a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257898938 | pes2o/s2orc | v3-fos-license | Effect of Mg Contents on the Microstructure, Mechanical Properties and Cytocompatibility of Degradable Zn-0.5Mn-xMg Alloy
The effect of magnesium (Mg) content on the microstructure, mechanical properties, and cytocompatibility of degradable Zn-0.5Mn-xMg (x = 0.05 wt%, 0.2 wt%, 0.5 wt%) alloys was investigated. The microstructure, corrosion products, mechanical properties, and corrosion properties of the three alloys were thoroughly characterized by scanning electron microscopy (SEM), electron back-scattered diffraction (EBSD), and other methods. According to the findings, the grain size of the matrix was refined by the addition of Mg, while the size and quantity of the Mg2Zn11 phase increased. The Mg content could significantly improve the ultimate tensile strength (UTS) of the alloy: compared with the Zn-0.5Mn alloy, the UTS of the Zn-0.5Mn-xMg alloys was increased significantly, and Zn-0.5Mn-0.5Mg exhibited the highest UTS (369.6 MPa). The strength of the alloy was influenced by the average grain size, the solid solubility of Mg, and the quantity of the Mg2Zn11 phase. The increase in the quantity and size of the Mg2Zn11 phase was the main reason for the transition from ductile fracture to cleavage fracture. Moreover, the Zn-0.5Mn-0.2Mg alloy showed the best cytocompatibility toward L-929 cells.
Introduction
Researchers favor biomedical degradable metal materials because of their high mechanical properties, biocompatibility, non-toxic degradation products, and other outstanding attributes. Numerous studies have been conducted on biodegradable metals such as zinc (Zn) and zinc-based alloys, magnesium and magnesium-based alloys, and iron (Fe) and iron-based alloys. The main disadvantage of magnesium and magnesium-based alloys is that the degradation rate is too fast, resulting in an increase in local pH value [1]. According to a number of papers, the addition of calcium, manganese, strontium, aluminum, zinc, and rare earths (RE), as well as severe plastic deformation, is helpful for increasing the corrosion resistance of pure magnesium [1][2][3]. Henderson et al. studied an as-extruded Mg-1 wt.%Ca-0.5 wt.%Sr alloy and demonstrated that the extrusion procedure enhanced the tensile and compressive properties and decreased the rate of degradation; cytotoxicity testing on MC3T3-E1 cells also confirmed low toxicity for a variety of dilution and time conditions [4]. Grain refinement is also beneficial for reducing tensile brittleness. In the study of Chaudry et al. on pure magnesium and Mg-2Al-1Zn-1Ca alloys, the elongation of the hot-rolled Mg-2Al-1Zn-1Ca alloy increased by 40% as a result of the addition of alloying elements; this increase in elongation was attributed to substantial texture weakening and a high Schmid factor (SF) for the non-basal slip system [5]. Iron and iron-based alloys have better mechanical properties than magnesium-based alloys, but their degradation rate in vivo is too slow. Fe-Mn alloys, which are frequently employed in orthopedic prostheses and vascular stents, are thought to be the most promising biodegradable biomedical materials [6,7]. Huang et al. selected Pd and Pt as alloying elements to prepare Fe-5 wt.%Pd and Fe-5 wt.%Pt alloys, and the results showed that Pd and Pt significantly increased the corrosion rate of the iron alloys; although both alloys showed somewhat higher hemolysis than pure iron, their hemolysis rates remained below 5% [8]. Therefore, accelerating the degradation of pure iron is one of the key factors for expanding the biological application range of iron-based alloys.
Zinc has a good osteogenic effect and a suitable electrode potential, which avoids the corrosion-rate problems of iron-based and magnesium-based alloys [9]. However, as-cast pure zinc has many shortcomings, such as poor mechanical properties (the elongation is usually less than 3%), so it has little engineering significance [10,11]. The major slip plane of the hexagonal close-packed (HCP) structure of pure zinc is the basal plane {0001}. This slip system provides only two independent slip systems, which causes pure zinc to frequently develop cleavage fractures and display brittle fracture behavior. Some reports stated that the basal slip of zinc is {0002}<1120>, the pyramidal slip system includes {1122}<1123>, and the prismatic slip is {1010}<1120>; activation of non-basal slip is more difficult than that of basal slip. Although plastic deformation slightly changes the mechanical properties of pure zinc, these are still far from the strength requirements of degradable biomaterials. The methods for improving the comprehensive properties of pure zinc include alloying and plastic deformation. The selection of alloying elements is limited; these are usually essential elements of the human body, such as magnesium, manganese, calcium, strontium, iron, and lithium [12,13]. Plastic deformation can refine the grains and make the microstructure more uniform. Commonly used plastic deformation methods include rolling, rotary swaging, hot extrusion, equal-channel angular pressing, and hydrostatic extrusion and drawing. The mechanical properties of Zn-Mn alloys can be improved significantly by proper processing and heat treatment [14][15][16][17].
As degradable biomaterials, zinc alloys are widely used in vascular stents, intramedullary nails, interference screws, bone nails, and other devices [13,[18][19][20]. More recently, they were even proposed for use in glaucoma drainage devices [21]. Zn-Mn alloys have excellent biocompatibility, osteogenesis, and mechanical integrity after implantation [18,22,23]. Magnesium can promote the production of new bone and has favorable biocompatibility. Based on the good plasticity of the Zn-0.5Mn alloy, this alloy can be further strengthened by adding new alloying elements. The strengthening effect of magnesium is second only to that of lithium, and the solid solubility of magnesium in zinc is extremely low at room temperature; it exists mainly as a second phase [12,24,25]. Additionally, refining the second phase is one of the crucial ways to enhance the alloy's overall properties [11,26]. If the magnesium content is too high, more eutectic structures are formed, which seriously degrades the mechanical properties of the alloy [9,27,28]. Sometimes, when the content of alloying elements is high, the strength does not increase further, while the plasticity decreases rapidly [24]. Therefore, designing the addition amount of alloying elements is an important way to obtain alloy materials with excellent properties.
In this paper, as-extruded Zn-Mn-Mg alloys with varying magnesium contents were investigated to assess the impact of magnesium on their mechanical properties, corrosion properties, and biocompatibility, and to provide data for the future development of biodegradable materials.
Alloys Preparation
The smelting, casting, and hot extrusion of the materials studied in this paper were all completed by Ningbo Powerway Alloy Materials Co., Ltd. Before the temperature of the argon protection furnace was raised to 650 °C to 700 °C, 90% of the total amount of zinc to be smelted and the manganese-magnesium master alloy were added. Then, after adding all of the remaining zinc, the temperature was held steady for 3 to 5 min. At a casting temperature of 600 to 650 °C, the melt was poured into a steel mold (60 mm in diameter). The ingot was then cut to Φ50 mm, annealed at 260 °C for 2.5 h to 3 h, and finally extruded through the die. Alloy bars with a diameter of Φ11.2 mm were obtained, and the extrusion ratio was 20:1. The three alloys with different compositions were named Zn-0.5Mn-0.05Mg, Zn-0.5Mn-0.2Mg, and Zn-0.5Mn-0.5Mg, respectively. Table 1 shows the actual compositions of the alloys as measured by inductively coupled plasma optical emission spectrometry (ICP-OES).
Microstructure Analysis
An X-ray diffractometer (Bruker AXS, Billerica, MA, USA, D8 Advance) was used to determine the phase composition of the alloys, and MDI Jade 6 software was used to analyze the test results. The test parameters were a Cu Kα target, a scanning range of 10° to 90°, and a tube voltage of 20 kV. An optical microscope (AXIOLAB 5, Zeiss, Oberkochen, Germany), a scanning electron microscope (SEM, FEI Quanta 250 FEG, FEI, Hillsboro, OR, USA) with an energy dispersive spectrometer (EDS), and a transmission electron microscope (TEM, Talos F200X) were used to study the microstructure. The average grain size, orientation maps, and inverse pole figures of the alloys were examined on their cross sections using electron back-scattered diffraction (EBSD, Thermo Scientific Verios G4 UC, Thermo Fisher Scientific, Waltham, MA, USA). All samples were etched with a mixture of 10 g chromium trioxide (CrO3), 2.5 mL nitric acid (HNO3), 0.75 g anhydrous sodium sulfate (Na2SO4), and 50 mL deionized water (H2O).
Mechanical Property Tests
Tensile samples were prepared according to the ASTM E8-04 standard, as shown in Figure 1. Tensile tests were performed using an electronic universal testing machine (MTS E43) at room temperature with a tensile speed of 1 mm/min, and three samples were tested for each alloy. The microhardness of the alloys was determined using a Vickers hardness tester (MVS-1000D1) with a load of 200 g and a dwell time of 10 s; for each alloy, the average of seven indentation points was reported.
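As a rough illustration of how the reported quantities are obtained from such a test, the sketch below converts a hypothetical load-extension record into engineering stress and strain and extracts the ultimate tensile strength and elongation at break; the specimen dimensions and readings are invented for the example and are not data from this work.

```python
# Illustrative only: hypothetical load-extension record, not measured data.
import numpy as np

def uts_and_elongation(load_N, extension_mm, area_mm2, gauge_mm):
    stress = np.asarray(load_N, dtype=float) / area_mm2        # engineering stress, MPa
    strain = np.asarray(extension_mm, dtype=float) / gauge_mm  # engineering strain
    return stress.max(), strain[-1] * 100.0                    # UTS (MPa), elongation (%)

load = [0, 1000, 2000, 3000, 3800, 4000, 3700]  # N (hypothetical)
ext = [0.0, 0.04, 0.10, 0.55, 2.0, 3.5, 5.0]    # mm (hypothetical)
uts, elong = uts_and_elongation(load, ext, area_mm2=12.0, gauge_mm=25.0)
print(f"UTS = {uts:.1f} MPa, elongation = {elong:.1f} %")
```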
Immersion and Electrochemical Experiment
The immersion tests were conducted in simulated body fluid (SBF) with a pH of 7.40 ± 0.01 at a constant temperature of 37 °C. According to the ASTM G31-72 standard, 20 mL/cm2 was chosen as the ratio of SBF volume to sample surface area. After the samples were removed at the set times, they were first placed in a 200 g/L chromium trioxide solution and ultrasonically cleaned for 10 min to remove corrosion products. The corrosion rate was calculated as follows [9]:

CR = (K × ΔW)/(A × T × D)

where CR (mm/y) is the corrosion rate, ΔW is the weight loss (g), K is the time conversion coefficient (8.76 × 10^4), T is the immersion period (h), A is the initial surface area (cm2), and D is the density of the material (g/cm3). An electrochemical workstation (PGSTAT 302) with a three-electrode system was used to conduct the electrochemical and electrochemical impedance spectroscopy (EIS) tests in this study. A saturated calomel electrode served as the reference electrode, a platinum electrode served as the auxiliary electrode, and the sample served as the working electrode [14].
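A minimal sketch of the weight-loss calculation above is given below; the input values are hypothetical, chosen only to show the unit handling, and do not come from this study.

```python
# Weight-loss corrosion rate, CR = K * dW / (A * T * D), with CR in mm/y.
K = 8.76e4  # time/unit conversion coefficient from the formula above

def corrosion_rate_mm_per_year(weight_loss_g, area_cm2, time_h, density_g_cm3):
    return K * weight_loss_g / (area_cm2 * time_h * density_g_cm3)

# Hypothetical 30-day immersion of a zinc-alloy sample (density ~7.1 g/cm3)
cr = corrosion_rate_mm_per_year(weight_loss_g=0.0045, area_cm2=3.8,
                                time_h=720, density_g_cm3=7.1)
print(f"CR = {cr:.3f} mm/y")
```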
Cytotoxicity Tests
The cytocompatibility of all alloys was assessed by the MTT colorimetric assay using mouse fibroblast (L-929) cells. The extraction liquid was prepared according to the ISO 10993-5:2009 standard: extraction medium was prepared in Dulbecco's modified Eagle medium (DMEM) containing 10% fetal bovine serum, in a humidified environment with 5% carbon dioxide (CO2) at 37 °C for 24 h, with a sample surface area to extraction liquid volume ratio of 2:3. The cells were seeded in 96-well cell culture plates at 1 × 10^4 cells/100 µL of medium, cultivated for 24 h in a humidified atmosphere containing 5% carbon dioxide, and allowed to attach [23]. After 24 h, the cell culture medium in the culture plate was replaced with different concentrations of extraction liquid, 100 µL was added to each well, and 100% cell culture medium was added to the negative control group. Six wells were used for each group. Cells were then cultured for 24 h and 48 h. MTT (10 µL) was added to each well and incubated for 4 h, after which 150 µL of dimethyl sulfoxide (DMSO) was added to each well. A microplate reader (Bio-Rad iMark, 168-1130, Hercules, CA, USA) was used to measure the spectrophotometric absorbance at 550 nm. The relative growth rate (RGR) of the cells was computed as follows [23]:

RGR (%) = (OD of the experimental group/OD of the negative control group) × 100
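The sketch below shows how the RGR defined above can be computed from raw optical densities; the six-well readings are hypothetical, and blank subtraction or statistical testing, which a full analysis would include, is omitted.

```python
# RGR(%) = mean OD of extract group / mean OD of negative control * 100
import statistics

def rgr_percent(od_extract, od_negative_control):
    return statistics.mean(od_extract) / statistics.mean(od_negative_control) * 100.0

negative_control = [0.82, 0.79, 0.85, 0.81, 0.80, 0.83]  # hypothetical OD550 values
extract_100pct = [0.66, 0.70, 0.64, 0.68, 0.69, 0.65]    # hypothetical OD550 values
print(f"RGR = {rgr_percent(extract_100pct, negative_control):.1f} %")
```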
Microstructure of Zn-Mn-Mg Alloys
The XRD results illustrated that the second phases of the alloys were MnZn13 and Mg2Zn11. The diffraction peaks of Mg2Zn11 were gradually enhanced, indicating that the amount of Mg2Zn11 increased with increasing magnesium content (Figure 2b). Thus, the as-extruded Zn-Mn-Mg alloys were composed of α-Zn, MnZn13, and Mg2Zn11. The optical microstructures of the cross sections and longitudinal sections of the alloys are shown in Figure 3. In the Zn-0.5Mn-0.05Mg alloy, fine precipitated particles were observed. With increasing magnesium content, the formation of a coarse second phase in the structure was easily observed because of the increase in Mg2Zn11. Obvious twinning was found in the microstructure, especially in some larger grains, indicating that twinning deformation occurs easily in large grains in zinc alloys [29]. Along the extrusion direction, the second phase formed banded structures due to shearing and crushing. It can be seen that the white particles contained zinc and manganese, and the black particles contained zinc and magnesium (Figure 4 and Table 2). The zinc/manganese atomic ratios at points 1, 2, and 3 were 13.47, 20.39, and 17.42, respectively, and the zinc/magnesium atomic ratios at points 4, 5, and 6 were 5.86, 5.03, and 6.12, respectively. Therefore, the white particles were preliminarily determined to be MnZn13, and the black particles Mg2Zn11 [30]. The circles mark the Mg2Zn11 phase in Figure 4e,f. Figure 5 shows that during the extrusion stage, the alloys with the highest and lowest magnesium contents experienced dynamic recrystallization and had more equiaxed grains than the alloy with the medium magnesium content. The average grain size tended to decrease, from 2.76 µm to 2.23 µm, as the magnesium content rose from low to high, a reduction of 23.7%. The inverse pole figures of the three alloys revealed that the as-extruded alloys all had a weak <0001> texture. The strength of the texture was related to grain refinement, the amount of Mg2Zn11, and recrystallization [30,31]. Some small dislocation arrays were found around the MnZn13 particles (Figure 6a). This finding showed that the second phase hindered the movement of dislocations, and the activation of the slip systems was hampered by the larger second-phase particles [32]. The proportions of recrystallized grains, subgrains, and deformed grains in all alloys are shown in Figure 7. All alloys underwent partial recrystallization during the hot extrusion process; subgrains accounted for the majority, and the recrystallization ratio first decreased and then increased. This was consistent with the electron back-scattered diffraction orientation maps, and the three alloys contained only a small amount of deformed grains.
Mechanical Properties
The strength and hardness of all alloys increased with increasing magnesium content, whereas the elongation decreased (Figure 8). Compared with the Zn-0.5Mn alloy, the addition of magnesium markedly increased the strength and hardness, while the elongation was maintained above 15% [23,31]. The Vickers hardness, ultimate tensile strength, and yield strength increased from 88.0 HV0.2, 336.8 ± 18.0 MPa, and 257.3 ± 14.1 MPa to 96.2 HV0.2, 369.6 ± 11.2 MPa, and 283.3 ± 8.1 MPa with increasing magnesium content (increases of 9.3%, 9.7%, and 10.1%, respectively). The elongation decreased correspondingly from 27.1 ± 6.9% to 17.4 ± 2.4% (a 55.7% decrease). The mechanical properties of the three alloys are listed in Table 3. These results indicate that the changes in the number and size of Mg2Zn11 were the main reasons for the increased hardness and brittleness of the alloys.
The fracture morphology clearly illustrates the transition of the alloys from ductile fracture to brittle fracture (Figure 9). Zn-0.5Mn-0.05Mg showed a typical ductile fracture, and its fracture surface had more and deeper dimples. Although dimples also appeared in Zn-0.5Mn-0.2Mg, they were obviously shallower, and the fracture surface had obvious torn edges, inclusions, river patterns, and "quasi-cleavage" planes [33]; this is a typical ductile-brittle hybrid fracture. When the magnesium content reached 0.5 wt.%, the elongation of the alloy decreased to below 20%. This was because the separation of Mg2Zn11 particles from the metal matrix and the resulting deformation of the surrounding matrix caused brittle, transgranular fracture [34,35].
Electrochemical and Immersion Tests
With increasing magnesium content, the alloys mainly exhibited a negative shift of Ecorr, and the passivation current decreased gradually, indicating that the corrosion rate increased with increasing magnesium content. Correspondingly, the radius of the capacitive arc of the alloys became increasingly smaller with the increase in magnesium content, also showing that the corrosion resistance deteriorated (Figure 10b). On the 30th day, as the magnesium content increased from low to high, the average corrosion rates were 0.037 mm/y, 0.052 mm/y, and 0.057 mm/y, respectively. The morphology of the alloys after immersion in SBF for 30 days is shown in Figure 11a-c. The corrosion products were mainly composed of zinc, phosphorus, and oxygen, followed by calcium, and are listed in Table 4; magnesium was detected at all eight locations. The atomic ratio of oxygen/zinc was between 4 and 5, and the atomic ratio of zinc/phosphorus was between 1.19 and 1.56. Therefore, the corrosion products may be zinc hydroxide (Zn(OH)2), tribasic zinc phosphate (Zn3(PO4)2), zinc carbonate (ZnCO3), and calcium phosphate (Ca3(PO4)2). In addition, there could be a tiny amount of other corrosion products, such as calcium hydrogen phosphate (CaHPO4) and magnesium hydroxide (Mg(OH)2). The alloy with more Mg2Zn11 had larger and deeper corrosion pits because of the galvanic corrosion between Mg2Zn11 and the Zn matrix, which accelerated the dissolution of the Zn matrix (Figure 11d-f). Figure 12 illustrates the RGR values of L-929 cells for different extract concentrations and culture times. With the exception of the alloy with the least magnesium, which had a lower relative growth rate when the concentration of the extraction liquid was 100%, the values of the rest were above 75%. Among them, Zn-0.5Mn-0.2Mg showed better cytocompatibility, indicating that a small amount of magnesium was beneficial to cytocompatibility [36]. In general, the relative growth rate decreased over time.
Discussion
The maximum solid solubility of magnesium in zinc is 0.15 wt.% at 364 °C [36]. A multiphase system made up of α-Zn and Mg2Zn11 at Mg contents above 0.008 wt.% has been described in some studies [13]. Therefore, magnesium is partially dissolved in the zinc matrix, and the remainder forms a precipitated phase. The average grain size during plastic deformation of the alloys was greatly influenced by the second phase. Compared with manganese (electronegativity 1.55, atomic radius 127 pm), magnesium (1.31, 160 pm) and zinc (1.60, 134 pm) have different electronegativities and atomic radii. Thus, Mg2Zn11 formed preferentially compared with MnZn13 during solidification, and its thermal stability and melting point are higher than those of MnZn13 [37]. The preferential precipitation of Mg2Zn11 at grain boundaries hindered the growth of dendrites, thereby refining the microstructure of the Zn-Mn-Mg alloys [37]. Guo, Zhu, and Shuai et al. introduced a mechanism for grain refinement by the second phase during the deformation of zinc alloys [17,23,35]. Owing to the strain incompatibility between the second phase and the grains, the second phase mainly acts by pinning the grain boundaries, which influences recrystallization and promotes the development of low-angle grain boundaries, resulting in grain refinement [31].
According to the mechanical characterization, the ultimate tensile strength of the alloys did not increase markedly with increasing magnesium concentration; in some studies, the addition of magnesium was likewise not proportional to the tensile strength [38,39]. Several strengthening mechanisms, such as second-phase strengthening, grain refinement strengthening, and solid solution strengthening, are responsible for the increase in strength. The total increase can be evaluated with the following formula [35]:

Δσ = σppt + σd + σss

where σppt is the stress increment from second-phase strengthening, σd is the stress increment from grain refinement strengthening, and σss is the stress increment from solid solution strengthening. The Hall-Petch equation can be used to relate strength to grain size [9,23], and the average grain sizes of the three alloys differ only slightly (from 2.76 µm to 2.23 µm) (Figure 5). Grain refinement tends to weaken texture but is beneficial for elongation; in zinc alloys with a small grain size (<10 µm), the deformation mechanism is often grain boundary sliding [17]. Second, adding magnesium causes lattice distortion and raises the resistance to dislocation motion, boosting strength. The atomic radii and valence electron structures of the solute atoms are the primary determinants of solid solution strengthening. Compared with manganese atoms (Δr = 5.2%), magnesium and zinc atoms have a larger difference in atomic radii (Δr = 19.4% ≥ 15%), but at the same time the low solid solubility limits the effect of solid solution strengthening [12,13,35,36]. The second phase is crucial because it can significantly affect the strength of the host metal in two different ways: when the second-phase particles are coarse and dispersed unevenly, the host metal may weaken, whereas when they are evenly distributed and fine, the host metal may strengthen [13,28]. However, grain refinement also makes the difference in strength less noticeable or even changes the strength values [40]. Therefore, the rapid decrease in elongation is directly caused by the growth in size of the second phase. Out of all the alloys examined in this research, Zn-0.5Mn-0.05Mg showed the best overall mechanical properties, and it can meet the clinical requirements of a degradable intravascular stent [41,42]. According to related literature reports, the size of Mg2Zn11 is extremely small, in the submicron to micron range, when the magnesium content is 0.05 wt.% [36]. This could be one of the factors contributing to the best mechanical properties of the Zn-0.5Mn-0.05Mg alloy. It demonstrates that, while hard precipitates are expected to increase strength, an alloy's mechanical properties can still be influenced by a mix of other microstructural characteristics [13,36]. Recrystallization is also influenced by the size and distribution of the second phase. Figure 7 illustrates the three grain types of the alloys. For the alloy with the least magnesium, the formation of regions of high lattice misorientation, which are conducive to recrystallization and nucleation, was prevented by the fine particle dispersions [23]. However, the fine second phase also hinders dislocation motion, thereby slowing down the recrystallization process. When the second-phase particles are larger, the stored deformation energy in the alloy can be higher, and dislocations can concentrate at high density around the second-phase particles, which leads to the formation of recrystallized nuclei near the second phase.
In this instance, the second phase encourages recrystallization once more [30].
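For orientation only, the short sketch below evaluates the grain-refinement term of the additive strengthening estimate above using the Hall-Petch relation σd = k·d^(-1/2); the Hall-Petch coefficient is an assumed order-of-magnitude value for zinc alloys, not a constant fitted to this work.

```python
# Hall-Petch grain-boundary strengthening increment, sigma_d = k / sqrt(d).
import math

def hall_petch_increment_MPa(k_MPa_um05, grain_size_um):
    return k_MPa_um05 / math.sqrt(grain_size_um)

K_HP = 220.0  # MPa*um^0.5, assumed illustrative value for Zn alloys
for d in (2.76, 2.31, 2.23):  # average grain sizes reported above (um)
    print(f"d = {d:.2f} um -> sigma_d ~ {hall_petch_increment_MPa(K_HP, d):.0f} MPa")
```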
Conclusions
The effect of Mg content on the microstructure, mechanical properties, corrosion behavior, and cytocompatibility of degradable Zn-0.5Mn-xMg (x = 0.05 wt.%, 0.2 wt.%, 0.5 wt.%) alloys was investigated in this work. The main conclusions are summarized as follows:
1. The Zn-Mn-Mg alloys were composed of α-Zn, MnZn13, and Mg2Zn11, and magnesium had a grain-refining effect. The average grain sizes were 2.76 µm, 2.31 µm, and 2.23 µm as the magnesium content increased from low to high.
2. The addition of magnesium accelerated the corrosion of the alloys. The main reason is that galvanic corrosion between Mg2Zn11 and the matrix accelerates the dissolution of the alloys. As the magnesium content increased, the average corrosion rates on the 30th day were 0.037 mm/y, 0.052 mm/y, and 0.057 mm/y, respectively.
3. All three alloys met the mechanical performance requirements of biodegradable materials. The as-extruded Zn-0.5Mn-0.05Mg alloy showed the best overall mechanical properties, whereas Zn-0.5Mn-0.5Mg exhibited the highest ultimate tensile strength (369.6 MPa). The fine second phase improved the comprehensive properties of the alloy; on this basis, the comprehensive properties can be further improved by refining the second phase.
4. The addition of magnesium improved the cytocompatibility. On the whole, the Zn-0.5Mn-0.2Mg alloy had the best cytocompatibility, followed by Zn-0.5Mn-0.5Mg, and, finally, Zn-0.5Mn-0.05Mg. At present, when zinc alloys are used in orthopedic materials, most problems are related to insufficient strength and slow degradation rates. By adding magnesium, the strength is improved and the degradation rate is increased, while the cytocompatibility is also improved. The Zn-Mn-Mg alloy is an excellent candidate for the future development of orthopedic materials.
Data Availability Statement: Data are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-04-02T15:05:32.704Z | 2023-03-31T00:00:00.000 | {
"year": 2023,
"sha1": "f3a2cdad1a8c02bc3d606e13faf2e73f63a66992",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4983/14/4/195/pdf?version=1680568815",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "07baa77ec754869ce55eaf001bf1ea5a84e8f05e",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
235479150 | pes2o/s2orc | v3-fos-license | Effect of Cyclical Microwave Modification on the Apparent Permeability of Anthracite: A Case Study of Methane Extraction in Sihe Mine, China
The application of cyclical microwave modification for accelerating the extraction of coalbed methane (CBM) from anthracite is limited. In this study, the apparent permeability of anthracite samples before and after each microwave treatment (three in total) for 120 s was measured by a self-built permeability-testing platform. Microcomputed tomography (micro-CT) technology and image-processing technology were employed to analyze the 3D micron-scale pore structures, especially the quantitative characterization of connected pores and throats. After modification, the average apparent permeability increased from 0.6 to 5.8 × 10–3 μm2. The generation, expansion, and connection of micron-scale pores and fractures became more obvious with each treatment. The total porosity increased from 3.5 to 6.2%, the connected porosity increased from 0.9 to 4.8%, and the porosity of isolated pores decreased from 2.5 to 1.4% after three cycles. The number, volume, and surface area of the connected pores as well as the number, radius, and surface area of the throats were significantly increased. In addition, the release of alkyl side chains from the anthracite surface reduced the capacity of the anthracite to adsorb CH4 and the decomposition of minerals promoted the development and connectivity of pores. As a result, the gas seepage channels have been greatly improved. This work provides a basis for micron-scale pore characterization after cyclical microwave modification and contributes to CBM extraction.
INTRODUCTION
Coalbed methane (CBM) provides a clean energy supply for the world. 1 Since the 1970s, CBM has developed into a sustainable commodity with a great economic value in the United States and Canada. 2 The CBM reserves in China are approximately 36.81 × 10 12 m 3 and rank third after Russia and Canada. 3 In the past, CBM was only discharged to avoid coal/gas outbursts or gas explosions during coal production. 4−6 It was not until 2003 that the first commercial well for CBM extraction in China was reported. 7 Subsequently, China's CBM extraction industry developed rapidly. However, the efficiency of CBM extraction is extremely low in most areas due to poor geological conditions such as low permeability and low porosity, which restricts the commercialization process.
Many approaches have been considered in the effort to stimulate coal reservoirs. Methods other than CO 2 substitution and N 2 displacement will have a significant effect on reservoir permeability. 8 One approach is fluid injection, which includes hydraulic, 9 high energy gas, 10,11 supercritical CO 2 , 12,13 liquid nitrogen, 14−16 and steam injection fracturing. 17,18 Another approach is the application of external acoustic, 19−21 electric, 22 electromagnetic, 23−26 or electrochemical fields. 27−33 Although these methods have achieved enhancements, there are still some limitations, such as water locking damage, construction costs, complex operations, ecological environmental pollution, and even the possibility of inducing earthquakes by hydraulic fracturing. 34,35 At present, microwave modification technology has been successfully used for coking, 36,37 reduction of energy for pulverization, 38 lignite dehydration, 39 coal desulfurization, 40 low-rank coal pyrolysis, 41 biomass pyrolysis, 42 enhancing floatation, 43 auxiliary rock breaking, 44 heavy oil exploitation, 45 and oil shale exploitation. 46 Microwave modification can increase the temperature of the coal or rock stratum and aid in moisture removal. Microwave radiation heating is a highly efficient and environmentally benign method for reservoir stimulation due to its unique instantaneous effects, overall penetrability, selectivity, and controllability. Microwaves are electromagnetic waves with frequency in the 300 MHz to 300 GHz range. The essence of the heating effect is dielectric relaxation; that is, under the effect of an electric field, the dipole moments of polar molecules rotate. When the electric field frequency is equal to the microwave frequency, the rotation speed of electric dipoles cannot keep up with the frequency of the microwave electric field, resulting in a hysteresis phenomenon. 47 Hong et al., 48 Xu et al., 49 and Huang et al. 50 carried out simulations of the heating behavior of coal under microwave radiation using COMSOL and reported that microwaves can rapidly heat coal. Cai et al., 51 Teng et al., 52 and Zhao et al. 53 found that the pore structures and permeability of coal change under high-temperature conditions. Wang et al. 54 found that high temperatures can induce cracks in the hot dry rock and improve its brittleness index and even increase its permeability by an order of magnitude.
Micro-CT is a high-efficiency method to analyze micron-scale pores, fractures, cracks, and cleats without causing any structural damage in coal samples. 55−57 Feng and Zhao 58 observed the characteristics of mesocrack evolution in lignite and gas coal with temperature variation. Kang et al. 59 observed and analyzed the thermal cracking process of oil shale from 20 to 600°C. The structural parameters of micron-size pores can be characterized quantitatively by the postprocessing of images. Kong et al. 60 measured the pore-fracture features of anthracite such as pore number, porosity, and average pore diameter before and after electrochemical modification combined with MATLAB software. Huang et al. 61 studied the connectivity of the pores and fractures in oil shale at different steam temperatures by digitization of cores. Kumar et al. 38 determined the cleat frequency and distribution in two cores and confirmed that new fractures were induced by exposure to high-energy microwaves. Yao et al. 62 demonstrated the capability of micro-CT to characterize the development of coal porosity and fractures and found that the distribution characteristics of porosity were highly anisotropic. Cai et al. 63 examined the evolution of a 3D fracture network under stress until failure occurred by micro-CT and acoustic emission.
In this paper, the apparent permeability was studied for anthracite samples before and after they underwent cyclical microwave modification; measurements were performed with a permeability-testing platform that was built in this laboratory. The structure of micron-scale pores before and after modification, especially the connected pores that make the main contribution to gas seepage in anthracite, were quantitatively characterized by micro-CT combined with the image-processing technology. Moreover, the surface groups on anthracite and the minerals in the coal were investigated by Fourier transform infrared (FTIR) spectroscopy.
RESULTS AND DISCUSSION
2.1. Influence of Cyclical Microwave Modification on the Apparent Permeability of Anthracite. Apparent permeability plays an important role in evaluating the recovery efficiency of CBM extraction. 2 Li et al. found that the apparent permeability of anthracite in the Qinshui Basin is in the range of 0.01 × 10 −3 to 10 × 10 −3 μm 2 . 64 The variation of the apparent permeability of anthracite with cyclical microwave modification is shown in Figure 1a, and the fitting results are shown in Figure 1b. The apparent permeability of the raw anthracite samples was in the range of 0.5−0.7 × 10 −3 μm 2 , and the results were consistent with Li et al. 2 After microwave exposure for 1, 2, and 3 cycles, the average apparent permeability increased from 0.6 to 3.6, 5.0, and 5.8 × 10 −3 μm 2 , increases of 5.1, 7.4, and 8.8 times, respectively. These results show that improving the apparent permeability by cyclical microwave modification can effectively accelerate the extraction of CBM. The modification mechanism is analyzed below through the changes in the microscale pore structures, surface groups, and minerals in the anthracite.
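As background for how such values are typically obtained, the sketch below evaluates apparent permeability from a steady-state gas flow measurement using the standard compressible-gas Darcy relation; the relation and all input values are assumptions for illustration, since the excerpt does not give the working equation of the self-built testing platform.

```python
# Steady-state apparent gas permeability (compressible gas, Darcy flow):
#   k = 2 * Q0 * p0 * mu * L / (A * (p1**2 - p2**2))
import math

def apparent_permeability_m2(Q0, p0, mu, L, A, p1, p2):
    """Q0: outlet flow rate (m3/s) at reference pressure p0 (Pa); mu: gas viscosity (Pa*s);
    L: sample length (m); A: cross-sectional area (m2); p1, p2: inlet/outlet pressures (Pa)."""
    return 2.0 * Q0 * p0 * mu * L / (A * (p1**2 - p2**2))

d = 0.025                       # hypothetical core diameter, m
A = math.pi * d**2 / 4.0
k = apparent_permeability_m2(Q0=4.0e-7, p0=101325.0, mu=1.1e-5,  # CH4 viscosity ~1.1e-5 Pa*s
                             L=0.05, A=A, p1=4.0e5, p2=101325.0)
print(f"k = {k / 1e-15:.2f} x 10^-3 um^2")  # 1e-15 m^2 corresponds to 1e-3 um^2
```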
2.2. Influence of Cyclical Microwave Modification on the Porosity of Anthracite. The change in pore structures after three cycles of microwave exposure is exhibited in a 3D representative volume element (3D-REV) in Figure 2. The porosity of the total pores and connected pores increased, while the porosity of the isolated pores decreased with cyclical microwave modification, as shown in Figure 3. The total porosity increased from 3.5 to 6.2% and the connected porosity increased from 0.9 to 4.8%, increases of 77.6 and 409.6%, respectively. The porosity of isolated pores decreased from 2.5 to 1.4%, a reduction of 45.3%.
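The split between total, connected, and isolated porosity can be reproduced from a segmented micro-CT volume along the lines sketched below; this is a generic illustration, with "connected" taken to mean components that span the volume from top to bottom, and it is not the authors' own image-processing pipeline.

```python
# Separate total, connected (spanning), and isolated porosity from a binary pore mask.
import numpy as np
from scipy import ndimage

def porosity_split(pore_mask):
    """pore_mask: 3D boolean array, True = pore voxel."""
    total = pore_mask.mean()
    labels, _ = ndimage.label(pore_mask)
    # labels present on both the first and last slice span the sample -> connected
    spanning = np.intersect1d(np.unique(labels[0]), np.unique(labels[-1]))
    spanning = spanning[spanning != 0]
    connected = np.isin(labels, spanning).mean()
    return total, connected, total - connected

# Toy volume just to demonstrate the call (random voxels, not CT data)
rng = np.random.default_rng(0)
mask = rng.random((64, 64, 64)) < 0.06
total, connected, isolated = porosity_split(mask)
print(f"total = {total:.3f}, connected = {connected:.3f}, isolated = {isolated:.3f}")
```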
This phenomenon indicated that microwave radiation had an obvious effect on the generation, expansion, and connection of micron-sized pores and fractures, which could be attributed to several factors. Images were obtained using micro-CT scanning at the same positions before and after three cycles of microwave modification, as shown in Figure 4. New micron-scale pores developed in zone A1, and the micron-scale pores and fractures in zone A2 became wider. The main reason for these changes was that the moisture in pores quickly evaporates and expands when stimulated by microwave radiation; this behavior may open some closed pores. The residual water in the coal matrix and part of the water bound to the minerals evaporate and is removed under microwave radiation; the high-pressure steam creates new pores, widens the original pores, and increases the pore connectivity. It can be seen that in zone A3, new cracks formed at the coal−mineral interfaces because the thermal conductivities and thermal expansion coefficients differ for coal and minerals, causing different temperature increases in coal and minerals under microwave irradiation. In addition, the high temperature (locally up to 369°C after 120 s of microwave irradiation) causes thermolysis in the macromolecular structure of coal. We can see in zone B that some minerals that were removed under microwave irradiation led to the formation of new micron-scale pores. The reason for the formation of new pores may have been that the microwaves catalyzed the chemical reaction of pyrite (FeS 2 ) in the coal with the surrounding H 2 , O 2 , or small molecules such as H 2 O, CO, and CO 2 adsorbed in the coal; these reactions may have released gases such as H 2 S, SO 2 , and carbonyl sulfide (COS). 65 Besides, some minerals may have been displaced and fallen into the fractures in more highly fractured regions. 38 2.3. Quantitative Characterization of Connected Pores and Throats before and after Cyclical Microwave Modification. The pores and throats of connected pores in the 3D-REV were characterized quantitatively based on a pore network model (PNM). 66 Figure 5 shows the changes in pore parameters in the 3D-REV before and after cyclical microwave modification. The pore number, pore volume, and pore surface area all increased at first and then decreased with increasing pore radius. The total number of connected pores increased from 2651 to 10,020, the total volume increased from 8.9 to 45.3 mm 3 , and the total area increased from 609.2 to 1899.6 mm 2 , the increase being 278, 410, and 212%, respectively. The number of pores with a radius of 70 μm corresponded to the maximum in the pore size distribution before and after modification, 417 and 1394, respectively. The maximum values of the pore volume and surface area occurred for pores with the maximum radii of 130 and 90−130 μm, respectively, after modification and 90−100 μm (both) before modification. The increase due to modification was the largest for the total volume of connected pores, which indicated that some of the pores became larger under the action of microwave radiation. The increase was larger for the total number of connected pores than the surface area, indicating that some isolated pores developed into connected pores during the modification process.
The throat parameters can essentially reflect the connectivity of pores. Figure 6 shows the throat parameters of the 3D-REV before and after cyclical microwave modification. The number of throats increased from 6607 before modification to 33,120 after modification; similarly, the maximum throat radius increased from 110 to 260 μm, and the maximum throat length decreased from 2710 to 1430 μm. The largest contributions to the throat surface area were throats with radii of 0−100 μm before modification and radii of 0−200 μm after modification. The decrease in throat length and increase in throat radius and surface area all indicated that the pore connectivity was better after modification.
The pore coordination number describes the configuration relationship between pores and throats and is numerically equal to the number of throats connected to a given pore. Figure 7 shows the relationship between the pore number and the pore coordination number before and after modification. The number of pores with coordination numbers less than 10 was 2591 (98% of the total) before modification and 9427 (94%) after modification. The maximum pore coordination number increased from 22 before modification to 29 after modification. These results indicate that the number of connected pores increased and their connectivity improved due to cyclical microwave modification.
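Since the coordination number is simply the number of throats attached to each pore, it can be tallied directly from a pore-network edge list. The toy sketch below (a hypothetical five-pore network, not the paper's PNM output) shows the bookkeeping.

```python
import numpy as np

def coordination_numbers(throats, n_pores):
    """Coordination number of each pore = number of throats attached to it.

    throats : (m, 2) integer array of pore-index pairs, i.e. a toy PNM edge list.
    """
    z = np.zeros(n_pores, dtype=int)
    np.add.at(z, throats[:, 0], 1)
    np.add.at(z, throats[:, 1], 1)
    return z

# Hypothetical 5-pore network with 6 throats
throats = np.array([[0, 1], [1, 2], [2, 3], [1, 3], [3, 4], [1, 4]])
z = coordination_numbers(throats, 5)
print(z, z.max(), (z < 10).mean())   # per-pore values, maximum, fraction below 10
```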
2.4. Change in Surface Groups on Anthracite and Minerals in Anthracite. The surface groups and minerals can affect the gas seepage behavior by influencing the adsorption/ desorption of methane and pore structures in anthracite. The chemical bonds in sulfur-containing groups such as mercaptans (−SH), thioethers (−S−), and thiophenes (−C 4 H 4 S) in the coal macromolecular structure break when they resonate with the electromagnetic microwaves, and some alkyl side chains and oxygen-containing functional groups in coal are pyrolyzed and released as gas. 65 Figure 8 shows the FTIR results for surface groups on anthracite samples before and after cyclical microwave modification. The peaks near 2925 and 2855 cm −1 are identified as the stretching vibrations of −CH 2 and −CH 3 , respectively. 67,68 The peaks near 2515 and 460 cm −1 are attributed to the vibrations of S−H. 69 The peak at 1600 cm −1 corresponds to the vibrations of CO and CC. 70 The peak at 1430 cm −1 is identified as the bending vibration of −CH 3 and the antisymmetric stretching vibration of carbonate groups. 32 The peak near 1030 cm −1 is attributed to the stretching vibrations of Si−O−Si and Si−O−C. 33 The peak near 540 cm −1 corresponds to the vibrations of S−S. 33 The adsorption peaks near 2925, 2855, and 1430 cm −1 are slightly smaller for modified samples than unmodified samples, indicating that some of the methyl and methylene groups were removed and the adsorption capacity of coal was weaker. 71 The decrease in CH 4 adsorption indicated that the gas seepage performance in coal was improved because of the coal matrix shrinkage effect. 72,73 The intensities of absorption peaks near 2515, 1030, 540, and 460 cm −1 decreased notably after modification of the samples, indicating that microwave radiation broke some sulfur-containing bonds and decomposed some minerals, such as sulfur, carbonate, and silicate. The decomposition of minerals would increase the pores in anthracite. This phenomenon is also observed in the micronscale pores and fractures shown in Figure 4 (zone B).
APPLICATION AND SIGNIFICANCE
Microwaves can rapidly heat the reservoir within their penetration depth. The penetrability and controllability of microwaves make their heating more efficient and more convenient than traditional conduction heating. More importantly, the whole modification process causes limited pollution of the environment. Therefore, in the field of engineering applications, the cyclical microwave modification method can not only inhibit methane adsorption and accelerate methane seepage but also make a significant contribution to environmental protection. Figure 9 shows a schematic diagram of the method for accelerating CBM extraction by microwave radiation combined with water injection. Microwave treatment and water injection are performed alternately by the system. High-pressure steam produced by rapid vaporization of water by microwaves can cause cracks to grow rapidly and relieve the water-locking damage caused by water injection. The cyclical temperature impact makes the coal reservoir expand and contract repeatedly, which is conducive to the development of pores and fractures. Yang and Liu 15 used experiments and modeling to study the changes in the pore structure of coal induced by cyclic nitrogen injections; they found that the total volume of mesopores (2−50 nm) and macropores (>50 nm) increased with cryogenic treatment, while the growth rate of pore volume decreased with an increasing number of freeze−thaw cycles.
CONCLUSIONS
(1) For accelerating CBM extraction, the influence of cyclical microwave modification on the apparent permeability of anthracite in Sihe mine, China, was studied, and the change in microscale pore structures, surface groups, and minerals in anthracite before and after microwave modification was measured.
(2) With the increase in cyclical microwave modification times, the apparent permeability of anthracite increased continuously due to the continuous increase in the quantity and connectivity of micron-scale pores.
(3) The gasified release of alkyl side chains from the anthracite surface reduced the CH 4 adsorption capacity, and the decomposition of mineral matter in anthracite increased the number of micron-scale pores. (4) The change in the apparent permeability of anthracite after three cycles of single-power microwave modification has been investigated. The effect of cyclical microwave modification on coals of various metamorphic degrees, with different electric powers and numbers of cycles, will be studied further to obtain more parameters for in situ engineering application. Table 1.
Experimental Apparatus.
The tests of apparent permeability of the anthracite samples before and after cyclical microwave modification were conducted using a self-built permeability-testing platform. Figure 10 shows the schematic of the experimental apparatus. The device is mainly composed of a vacuum-pumping system, a confining pressure-loading system, an inlet/outlet system, and a measuring system. Before the test, the coal samples and the porous metal gaskets at both ends were placed together in the sealing sleeve and were sealed between the base and the axial piston with two sealing rings.
The cyclical microwave modification experiment of the anthracite samples was carried out in a P70F20CL-DG(B0) microwave oven produced by Guangdong Galanz company. The dimensions of the resonator in the oven were 180 mm high, 315 mm wide, and 329 mm deep. The frequency of the microwave oven was 2.45 × 10 9 ± 50 Hz. The maximum power was 700 W and could be adjusted.
The micro-CT scanning of the anthracite samples was conducted with a nanoVoxel-4000 open tube reflective high penetration CT system produced by Sanying Precision Instruments Co., Ltd. The scanning voltage was 150 kV, the scanning current was 150 μA, the exposure time was 3.5 s, and the spatial resolution was 19.6 μm. Sixteen-bit images with 3000 × 3000 × 3000 voxels were obtained after the whole sample was scanned and processed.
The analysis of the surface groups of the anthracite samples was performed using a Nicolet iS5 FTIR instrument produced by the Thermo Fisher company. The detection spectral range of the instrument was 350−7800 cm −1 . The resolution and accuracy of the instrument were better than 0.5 and 0.01 cm −1 , respectively, and the signal-to-noise ratio was 40,000:1. Before the test, the dried fine anthracite particles and potassium bromide (KBr) were fully ground in an agate bowl at a ratio of 1:150 wt %; then, this powder was loaded into a mold, and tablets were pressed under a pressure of 10 MPa.
5.3. Experimental Process. Figure 11 shows the schematic of the cyclical microwave modification experimental procedure. Before the experiment, the powdered anthracite samples were dried to a constant weight in a vacuum-drying oven at 373−378 K. First, the micro-CT scanning and permeability measurements of the cylindrical raw anthracite sample were carried out. At the same time, the infrared spectrum of anthracite powder was measured. Second, the sample was irradiated in the microwave oven for 120 s at a power of 700 W. Then, the micro-CT scans, seepage measurements, and FTIR measurements were repeated with the modified sample. Following this procedure, the experiment was completed after three microwave modification cycles. The temperature of the sample was monitored using an AS852B infrared thermometer produced by the Smart Sensor company. The temperature detection range was −50 to 750 °C, with an accuracy of ±2%, a resolution of 0.1 °C, and a response time of 500 ms.
The apparent permeability was calculated from the steady-state methane flow measurements as

k a = 2 q P 0 μ L / [A (P 1 ² − P 2 ²)]

where k a is the apparent permeability of the anthracite sample (10 −3 μm 2 ), q is the flow rate of the CH 4 (cm 3 /s), μ is the dynamic viscosity of the CH 4 at a pressure of (P 1 + P 2 )/2 (mPa·s), L is the length of the anthracite sample (cm), A is the cross-sectional area of the anthracite sample (cm 2 ), P 0 is the standard atmospheric pressure (MPa), P 1 is the methane pressure at the inlet (MPa), and P 2 is the methane pressure at the outlet (MPa). The confining pressure was set at 2 MPa, and the outlet pressure was set at atmospheric pressure. The inlet pressure was set at 0.5, 0.9, 1.5, 1.9, and 2.5 MPa in successive tests. Effective stress is the difference between the confining stress and the average pore fluid pressure, 70

σ e = σ c − α P

where σ e is the effective stress (MPa), σ c is the confining stress (MPa), P is the average pore fluid pressure (MPa), and α is the effective stress coefficient (dimensionless, approximately equal to 1). The relationship between the effective stress and the confining pressure, inlet pressure, and outlet pressure is listed in Table 2. The calculation method is consistent with Li et al. 75
5.5. Image Processing. The 3D images before and after cyclical microwave modification were reconstructed to visualize and analyze the changes in the internal pore-fracture characteristics. The brightness and contrast of the images were adjusted to the appropriate ranges. Then, processes for denoising and enhancement were carried out, and edge detection and binarization segmentation were conducted. The coal matrix, minerals, and pores were divided by interactive thresholding combined with an interactive top-hat transform.
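For readers reproducing this step, the sketch below evaluates the apparent permeability and effective stress for a single measurement point, assuming the standard compressible-flow form reconstructed above and ignoring any unit-conversion factor needed to express k a in 10 −3 μm 2 ; all numbers are illustrative, not the paper's data.

```python
def apparent_permeability(q, mu, L, A, p0, p1, p2):
    """Apparent gas permeability from steady-state flow, assuming the standard
    compressible-flow (Darcy) form k_a = 2*q*p0*mu*L / (A*(p1^2 - p2^2)).
    Any unit-conversion factor needed for 1e-3 um^2 output is omitted here."""
    return 2.0 * q * p0 * mu * L / (A * (p1 ** 2 - p2 ** 2))

def effective_stress(sigma_c, p1, p2, alpha=1.0):
    """sigma_e = sigma_c - alpha*(p1 + p2)/2, as defined in the text."""
    return sigma_c - alpha * (p1 + p2) / 2.0

# Illustrative numbers only (not measured values from the paper)
print(apparent_permeability(q=0.02, mu=0.011, L=5.0, A=4.9, p0=0.101, p1=1.5, p2=0.101))
print(effective_stress(sigma_c=2.0, p1=1.5, p2=0.101))
```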
The 3D-REV is a small cube that can reflect the pore structure of the whole coal sample. More petrophysical properties of coal samples can be reflected when more voxels are contained in the 3D-REV. However, the selected 3D-REV cannot be too large due to the limited computing capacity. 76 Figure 12 shows the change in porosity of the cube with the change in side length (voxels) before and after cyclical microwave modification with point 1 (800, 800, 800), point 2 (1500, 1500, 1500), and point 3 (2200, 2200, 2200) as central points. When the side length was greater than 280 voxels, the porosity tended to be a certain value and was approximately equal to the porosity of the whole sample. In this study, a cube with point 2 (1500, 1500, 1500) as the center point and a side length of 500 voxels (9.8 mm) was selected as the 3D-REV, as shown in Figure 13. The 3D-REV was divided into 500 layers along the vertical Z-axis direction, and the porosity of each XY section was calculated to observe the internal pore structure of the 3D-REV more accurately, as shown in Figure 14. The total pores and connected pores of all XY planes increased irregularly after cyclical microwave modification, which was caused by the heterogeneous distribution of molecular structures, water, and minerals in the coal. The PNM was used to quantitatively characterize the pore and throat parameters before and after cyclical microwave modification. The PNM uses regular shapes to characterize the complex space in coal or rock. In this model, the maximal inscribed sphere algorithm was used to idealize the connected pores into two parts: pores and throats. Figure 15 shows the PNM of the 3D-REV before and after cyclical microwave modification. The calculation method was consistent with Silin and Patzek, 77 Al-Kharusi and Blunt, 78 Ngom et al., 79 Lin et al., 66 | 2021-06-20T05:22:27.271Z | 2021-05-28T00:00:00.000 | {
"year": 2021,
"sha1": "869c88266fa8e516a1091220bc88d9d8b0da2d38",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.1c01122",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "869c88266fa8e516a1091220bc88d9d8b0da2d38",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248927122 | pes2o/s2orc | v3-fos-license | Postharvest Treatments on Sensorial and Biochemical Characteristics of Begonia cucullata Willd Edible Flowers
Edible flowers (EFs) are currently consumed as fresh products, but their shelf life can be extended by a suitable drying technique, avoiding the loss of visual quality and valuable nutraceutical properties. Begonia cucullata Willd is a common ornamental bedding plant, and its leaves and flowers are edible. In this work, B. cucullata red flowers were freeze-dried (FD) and hot-air dried (HAD) at different temperatures. To the best of our knowledge, our study is the first one comparing different drying methodologies and different temperatures involving sensory characterization of EFs; therefore, a codified method for the description of the sensory profile of both fresh and dried B. cucullata was developed and validated. Phytochemical analyses highlighted the better preservation of antioxidant compounds (polyphenols, flavonoids and anthocyanins) for flowers dried at 60–70 °C. Visual quality was strongly affected by the drying treatments; in particular the color of the HAD samples significantly turned darker, whereas the FD samples exhibited a marked loss of pigmentation. Although all drying conditions led to a reduction in the hedonic indices if compared with fresh flowers, the best results in terms of organoleptic properties were obtained when the drying temperature was set to 60 or 70 °C.
Introduction
Edible flowers (EFs) are traditionally consumed in many regions of the world, since their ornamental value, flavors and tastes have been appreciated for thousands of years [1]. Begonia cucullata Willd (syn. Begonia semperflorens Link & Otto; common English name: wax begonia) is one of the most common species [2], currently cultivated and consumed as EFs. This plant (Begoniaceae family) is native to South America, and it has naturalized elsewhere in tropics and subtropics [2,3]. B. cucullata has been produced as ornamental bedding plant for long time, and new hybrids and varieties were frequently created to meet the market requirements [4]. Currently, its use as EFs has been documented in various parts of the world [2,5]. Wax begonia flowers are characterized by several colors (scarlet, red, rose, white), a pleasant crispy texture and slightly lemon-like taste with a mild bitter aftertaste [1,2]. B. cucullata EFs are also a source of several secondary metabolites (mainly France Etuves, Chelles, France) for 20.5, 16, 11 and 3 h, respectively. The hot-air drying process was concluded when flowers dry weight (DW) remained unchanged over time, and the moisture was less than 10%. Weight loss percentage was calculated as follows: [(FW-DW) × 100]/FW. Hot-air drying was carried out at the Chambre d'Agriculture des Alpes-Maritimes (CREAM, Nice, France), whereas descriptive sensory evaluation and biochemical analyses on fresh, freeze-dried and hot-air dried flowers were performed at the Department of Agricultural, Food and Environment (DAFE), University of Pisa.
Color Parameters
The color of the different flowers was measured according to the Commission Internationale de l'Eclairage CIE L*a*b* Color System by means of a tristimulus colorimeter (Eoptis, Mod. CLM-196 Benchtop, Trento, Italy). The analysis was performed on flowers lying on an area of approximately 24 cm 2 and each sample was analyzed in triplicate. The color was defined on the basis of the chromatic coordinates [19]: lightness (L*), green-red (a*) and blue-yellow (b*) components. The Chroma value C*, which is an expression of color saturation, was also used to evaluate the color, and calculated by the relation:

C* = √[(a*)² + (b*)²]

The color difference among samples was expressed as ∆E * ab :

∆E * ab = √[(∆L*)² + (∆a*)² + (∆b*)²]
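The two CIE quantities used here reduce to one-line calculations; the snippet below, with made-up L*a*b* coordinates rather than the measured values of Table 1, shows how C* and ∆E * ab would be obtained.

```python
import math

def chroma(a, b):
    """C* = sqrt(a*^2 + b*^2), the CIE L*a*b* saturation measure."""
    return math.hypot(a, b)

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) triples."""
    return math.dist(lab1, lab2)

# Illustrative coordinates only, not the measured values of Table 1
fresh, had80 = (42.0, 35.0, 12.0), (30.0, 18.0, 8.0)
print(chroma(*fresh[1:]), chroma(*had80[1:]), delta_e_ab(fresh, had80))
```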
Descriptive Sensory Evaluation
Sensory profiles of the flowers were determined by a descriptive analysis by a panel of trained assessors (10 assessors, 6 females and 4 males, aged between 23 and 60 years) from the "expert panel" of the DAFE of the University of Pisa. The DAFE internal procedure for assessor selection and the training was applied as previously reported [20]. A specific training session was organized before the beginning of the specific tasting sessions, with the aim of defining the specific method of the sensory evaluation of flowers. All of the trained panelists were firstly involved in a consensus panel specifically aimed at generating descriptors and their definitions. A final set of 41 descriptive parameters, including both quantitative and hedonic attributes, was individuated by agreement among panelists, and an innovative sensory wheel specific for the tasting of flowers was developed ( Figure 1).
Biochemical Analyses
Total polyphenols, flavonoids and anthocyanins content, as well as the flowers' antioxidant activity assessed by the 2,2-diphenyl-1-picrylhydrazyl radical (DPPH) and Ferric ion Reducing Antioxidant Power (FRAP) assays, were determined after extraction of fresh (150 mg) and dried (20 mg) flowers in 2 mL of 70% (v/v) methanol solution, as already described [18]. The Folin-Ciocalteu method, according to Singleton and Rossi [21], was followed to quantify the flowers' total polyphenolic content (TPC). Sample absorbance was read at 765 nm (Ultraviolet-Visible spectrophotometer, SHIMADZU UV-1800, Shimadzu Corp., Kyoto, Japan), and the results were expressed as mg gallic acid equivalent (GAE) per g FW or DW. Total flavonoids content (TFC) was determined as reported in Kim et al. [22]. The absorbance was read at 510 nm and the concentration was expressed as mg of (+)-catechin equivalents (CE) per weight of sample. The determination of the DPPH scavenging activity was performed according to Brand-Williams et al. [23], and the data were expressed as mmol Trolox Equivalent Antioxidant Capacity (TEAC) per weight of sample. The FRAP assay was performed according to Szôllôsi and Szôllôsi Varga [24], and the results were reported as mmol FeSO 4 /g FW or DW. The total soluble sugars (TSS) were extracted as reported in Das et al. [25], starting from 150 or 20 mg of fresh and dried flowers, respectively. TSS were spectrophotometrically estimated as described in Marchioni et al. [18], and data were reported as mg glucose per g FW or DW.
Total acidity (TTA) was determined by acid-base titration, according to Li et al. [26], with some modifications. Briefly, 50 mg of sample was mixed in water (solid/liquid extraction, ratio 2:1, w/v) and sonicated for 15 min at room temperature. After paper filtering, the extract was then titrated with NaOH 0.01 N, using 1% phenolphthalein as an indicator and the TTA was expressed as meq of citric acid/g of sample. The pH of the extract was determined with a pH-meter (pH 80+ DHS, XS Instrument, Modena, Italy).
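The titration result converts to meq/g with simple arithmetic; the sketch below assumes the 50 mg sample size and 0.01 N NaOH stated above, with a hypothetical titrant volume.

```python
def titratable_acidity(v_naoh_ml, n_naoh=0.01, sample_mg=50.0):
    """Milliequivalents of acid per gram of sample from an acid-base titration:
    meq = V(L) * N(eq/L) * 1000, then normalised by the sample mass."""
    meq = (v_naoh_ml / 1000.0) * n_naoh * 1000.0
    return meq / (sample_mg / 1000.0)

# 2.4 mL of 0.01 N NaOH on a 50 mg sample -> 0.48 meq/g (hypothetical reading)
print(titratable_acidity(2.4))
```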
Statistical Analysis
Biochemical results were statistically analyzed by one-way analysis of variance (ANOVA) (Past3, version 3.15), using either Tukey Honestly Significant Difference (HSD) or the Mann-Whitney U test according to the variance homogeneity (Levene test), with a cut-off significance of p < 0.05 (letters).
Sensory data were processed with Big Sensory Soft 2.0 (ver. 2018) and then analyzed by two-way ANOVA, with samples and panelists as the main factors.
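A minimal sketch of such a two-way ANOVA (samples and panelists as main factors) is shown below using statsmodels rather than Big Sensory Soft; the scores are invented for illustration only.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format scores: one row per (sample, panelist) evaluation
df = pd.DataFrame({
    "sample":   ["F", "FD", "HAD60", "HAD80"] * 3,
    "panelist": ["P1"] * 4 + ["P2"] * 4 + ["P3"] * 4,
    "score":    [8, 5, 7, 4, 7, 5, 6, 4, 8, 4, 7, 5],
})
model = ols("score ~ C(sample) + C(panelist)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```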
Colorimetric Analysis
The color represents one of the most important characteristics with respect to the postharvest handling of EFs, since it affects the enjoyment of eating [27]. In particular, red flowers are able to stimulate appetite [28]; thus, the proper postharvest treatment should maintain this feature as much as possible to preserve B. cucullata attractiveness. The flowers appearance after the different treatments are shown in Figure 2.
As observed by Zhang et al. [29], the color of the samples significantly changed because of the drying process, turning darker, with the exception of the FD samples, in which a loss of pigmentation occurred (Figure 2 and Table 1).
Table 1. CIE L*a*b* color parameters of the B. cucullata flowers subjected to different postharvest treatments. Abbreviation: FD-freeze-dried; HAD-hot-air dried.
[27], red rose and carnation [30]. The redness parameter a* appeared significantly reduced in all dried samples compared with fresh flowers. This was especially true for the sample treated at 80 °C, showing how the highest drying temperature led to more serious color degradation. In addition to the a* values, HAD treatments also induced a reduction in the bluish parameter, confirming what was observed by Siriamornpun et al. [27] on T. erecta flowers. These authors suggest that the reduction of those color coordinates was probably due to both non-enzymatic browning reactions and the destruction of pigments in the petals.
Chroma significantly changed as a consequence of the drying processes carried out, taking on a reduced saturation and an opaque appearance with the increasing temperature applied ( Table 1).
As outlined by the distance between the chromatic coordinates (∆E * ab ) ( Table 2), all samples could be visibly discriminated in color when compared to each other, in accordance with the above-discussed results. Starting from recent findings on this topic for different EFs [31,32], in the future, our investigations could be made on the pretreatments of flowers before drying to reduce the color depletion compared to the fresh ones.
Descriptive Sensory Evaluation
Organoleptic performance, flavor and overall impression are pivotal in evaluating the quality of flowers intended for culinary uses [33]. Despite the favorable results on their content of bioactive compounds, there is little information on consumers' organoleptic preferences for EFs, and their sensory characterization has so far been limited to a small number of species in the literature [33,34]. In addition, although pretreatments such as drying are used as preservation techniques to extend EFs shelf life, they can deeply affect overall quality and, as a consequence, acceptability [31,32]; to the best of our knowledge, no data are available about the impact of different drying techniques on the sensory expression of EFs.
During panel tests all the parameters selected and reported in Figure 1 were addressed and evaluated by the judges. For clarity of exposure, in Figure 3a-c, only sensorial parameters that showed significant differences among treatments were reported and further discussed. At a visual level, the higher the temperature utilized for drying, the higher the changes were, as reported by panelists, with a significant reduction of brightness and uniformity of color together with the physical integrity, whereas the bunching appears significantly increased when the drying temperature reached 70 and 80 • C. When the drying temperature reaches and exceeds 60 • C, there is a significant increase in Complexity of smell together with the characters of Fruity and Overripe. On the other hand, the Sour and Bitter tastes increased significantly when the drying temperature reached 80 • C. When a new technical process in food production is explored, the level of hedonic quality, expressed by the obtained products, is fundamental in determining its consumer acceptability [35]. Therefore, some hedonic parameters related to view, smell, taste and overall pleasantness were also evaluated ( Figure 4) to obtain preliminary information about the impact of the treatments used on the organoleptic acceptance of the different samples. In this context, Benvenuti et al. [34] showed the dependence on different personal taste from the evaluation of B. cucullata, due to its particular acidic taste, even comparable to lemon taste perception. Through the overall hedonic index (Figure 4), calculated by the means of the values attributed during panel tests to each hedonic parameter, and converted on a scale from 0 to 10 (Equation (3), it was possible to evaluate the whole organoleptic quality of all the flowers' objects of the research.
Overall hedonic index = MEAN[Hedonic indices] × 1.11 (3)
As shown in Figure 4, fresh flowers had the highest overall hedonic index, suggesting their potentially greater consumer acceptability over dried flowers. Nevertheless, even if all drying conditions led to a reduction in the hedonic indices of flowers, the best results in terms of organoleptic properties were obtained when the drying temperature was set at 60 or 70 °C.
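For clarity, the index of Equation (3) amounts to the following one-liner; the input scores here are hypothetical.

```python
def overall_hedonic_index(hedonic_scores):
    """Mean of the hedonic sub-scores rescaled to a 0-10 range (factor 1.11, Eq. (3))."""
    return sum(hedonic_scores) / len(hedonic_scores) * 1.11

print(overall_hedonic_index([7, 6, 8, 7]))   # hypothetical view/smell/taste/overall scores
```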
To the best of our knowledge, very few studies were performed on EFs descriptive sensory evaluation, involving only fresh flowers [13,33,34,36]. Earlier reports were mainly consumers' preference surveys, in which flowers' overall quality and colors were evaluated for some well-known species, such as pansy (Viola × wittrockiana Gams) [37], viola (Viola tricolor L.), borage (Borago officinalis L.) and nasturtium (Nasturtium officinalis L.) [28,38]. Until now, only one work has compared EFs postharvest treatment, also from a sensory point of view, evaluating gamma-irradiated Bauhinia variegata L. flowers with three different doses [39]. Therefore, our study is the first one comparing different drying methodologies and different temperatures involving sensory characterization of EFs.
Figure 4. Overall hedonic indices of dried flowers. Abbreviation: F-fresh flowers; FD-freeze-dried.
Biochemical Analyses
In recent time, worldwide consumers' demand for EFs increased due to their phytochemical content with healthy properties [9]. A primary goal of postharvest treatment (e.g., drying methods) is the maintenance of the highest phytonutritional content. Therefore, some primary and secondary metabolites were quantified in red B. cucullata fresh, freeze-dried (FD) and hot-air dried (HAD) flowers (Table 3). Table 3. Determination of anthocyanins, total phenolics, total flavonoids, antioxidant activity (DPPH and FRAP assay), total soluble sugars (TSS) and pH and total acidity (TTA) in B. cucullata flowers subjected to different postharvest treatments. Data are expressed as mean ± standard deviation (n = 3). Within drying treatments, different letters mean significant differences using Tukey HSD or the Mann-Whitney U tests, with a cut-off significance of p < 0.05. Fresh flowers are characterized by a good amount of phenolic compounds and antioxidant activity, despite their high water content. In fact, regardless to the drying methods and the temperature applied, flower weight loss was higher than 90% (data not shown). This percentage was comparable in all treatments, varying between 93.2 (HAD 70 • C) and 95.6% (HAD 80 • C) (data not shown). Total phenolics content (TPC) was comparable to Chensom et al. [7] and Traversari et al. [40], even if we highlighted a higher antioxidant activity (3.43 mmol TEAC/g FW). Despite the same color, total anthocyanins were significantly higher in our samples, if compared with the ones of Traversari et al. [40].
Fresh Flowers
Regarding dried flowers, the highest content of total polyphenolic compounds was detected in FD and HAD 50 • C samples, followed by HAD 60 and 70 • C. The highest tested temperature led to the most significant decrease of these metabolites. A similar trend was also observed for the total flavonoids content (TFC), even if slight differences between FD and HAD flowers were detected. Similarly to TPC, TFC decreased significantly with increasing temperature, reaching the lowest amount at 70 and 80 • C (Table 3). In the literature, several works highlighted the reduction of TPC and TFC when flowers were dried with high temperatures, and FD is identified as a better drying method than HAD [14,[41][42][43]. Nevertheless, our results showed a good amount of total phenolics still guaranteed up to 70 • C, in agreement with that observed in Camelia sinensis flowers (comparable TPC between FD and HAD 60 • C samples) [43]. This is probably also due to an inactivation of polyphenol oxidases (PPO) that occurred at temperatures higher than 50 • C, which prevents their oxidation [42]. This hypothesis was supported also by Tan et al., 2015 [44] and Loh and Lim, 2017 [45], who reported very low residual PPO activity (<3%) in Morus alba and Persea americana leaves, respectively, after drying treatment at 50 • C for few hours (4-5 h). The study of the activity of enzymes that oxidize phenolic compounds is beyond the aims of this work. Further investigation on the topic should be performed to elucidate the physiological responses of the drying process in B. cucullata flowers, also evaluating enzymes stability at different temperatures. Peroxidase (POD) should be also taken into account, since it is considered a most heat-stable vegetable enzyme, able to oxidize phenolic compounds, also leading to negative flavor changes during storage [46]. Nevertheless, the drying temperature tested in our work should be enough to significantly decrease POD activity, on the basis of the literature data [47].
In addition to drying time and temperature, water activity (WA) should be also taken into account for further investigation, since those parameters could synergistically affect enzymatic inactivation during the drying process [48].
On the other hand, TPC reduction observed in HAD B. cucullata flower at 80 • C was probably linked to their degradation at high temperatures [49].
The highest amount of total monomeric anthocyanins was detected in HAD 60 and 70 °C flowers, followed by the ones dried at 50 °C, 80 °C and FD. Despite anthocyanins being heat sensitive, the shorter exposure time at 70 °C than at 50 °C (11 and 20.5 h, respectively) caused less damage to this class of metabolites. Similar behavior was observed in lilac Bletilla striata flowers [50]. Surprisingly, 80 °C and FD shared the lowest content of anthocyanins. A temperature of 80 °C could be too high for anthocyanins, whereas the FD treatment could lead to higher pH values. To validate this hypothesis, further investigations are required to explain the loss of color in FD begonia flowers (Figure 2) (e.g., identification of the flowers' red pigments and their degradation kinetics).
FD is a drying technique known to maintain an unaltered and high quality of food products, able to prevent chemical decomposition due to its low processing temperature and lack of oxygen [17]. FD begonia flowers showed the most remarkable radical scavenging activity (DPPH assay), a parameter which was almost halved in HAD 80 • C samples. These results were confirmed by the ferric ion reducing antioxidant power (FRAP) assay, where the data showed the same trend. Among HAD samples, no substantial differences were highlighted in begonia flowers dried from 50 to 70 • C.
Taken together, HAD flowers at 60 and 70 • C could be a fairly fast drying method, that would allow to obtain B. cucullata dried flowers with high amount of phenolic compounds, anthocyanins included, and good antioxidant activity.
The highest amount of total soluble sugars and acidity were quantified in HAD flowers at 70 and 80 • C, whereas a progressive decrease was observed for both parameters at lower temperatures (Table 3). With regards to sugars, contrasting data are present in the literature. Our results are similar to those obtained in chrysanthemum flowers, with a higher amount of soluble sugars detected in oven dried samples at 65 and 75 • C than in those oven dried at 55 • C [51]. Contrary to our observation, Park et al. [48] and Marchioni et al. [18] reported that 50 • C is the proper temperature to retain carbohydrates in Agastache rugosa and A. aurantiaca flowers, respectively. Interestingly, A. rugosa HAD flowers at 50 • C showed the highest content of sucrose [52], which is the soluble sugar mainly perceived as sweet to the human palate [53]. Despite no specific compound being identified in our work, no significant differences on sweet taste were observed in begonia flowers at any temperature tested.
To the best of our knowledge, very few data are available on HAD flowers and titratable acidity (TTA). Fernandes et al. [54] investigated the TTA in freeze-dried and HAD (50 • C) Centaurea cyanus flowers. The results cannot be easily compared with ours, since completely different drying cycles were applied (a few hours vs. 20.5 h). On the other hand, Park et al. [52] observed that oven-dried flowers at 80 • C were significantly higher in tricarboxilic acid cycle (TCA) intermediates (citric acid, fumaric acid and succinic acid) than FD and other oven-dried samples. Sour intensity is mainly determined by the presence of organic acids, such as citric acid [49], and our results could be in agreement with those observations. In fact, we observed higher TTA in flowers treated at the highest temperatures (70 and 80 • C), together with the perception of sour taste (Figure 3c and Table 3).
As it is known in the literature, phenolic compounds, mainly flavonoids, are responsible for the bitterness of plants product [55], and these bioactive compounds showed dramatic changes at the beginning of the drying process, probably due to unsteady states of heat and mass transfer simultaneously [43]. Fernandes et al. [56] showed a clear correlation between flowers tastes and bioactive compounds in their petals. Drying cycles of different lengths should be tested in future works, in order to check and avoid the molecules responsible for the bitter taste.
Conclusions
The consumption of EFs such as B. cucullata could be a valuable solution to enrich diets with bioactive compounds (i.e., polyphenols, flavonoids and anthocyanins). It is indeed well known that following a well-balanced diet protects against malnutrition in all its forms, as well as noncommunicable diseases, such as diabetes, heart disease, stroke and cancer. Fresh EFs bring the highest content of healthy molecules; however, they have a reduced shelf life, and thus they are available on the market only for short periods and in a limited distribution area close to the producing site. Therefore, the main purpose of the research was to investigate the effect of different drying procedures (freeze-drying and hot-air drying at 50, 60, 70, 80 • C) in order to select the best working conditions suitable to maximally improve the EFs stability, and in the meantime, avoiding or at least reducing the loss of visual quality, sensory and nutraceutical properties, without the addition of preservatives or other additives. Moreover, our study is the first one comparing different drying methodologies and different temperatures involving sensory characterization of EFs; therefore, a codified method for the description of the sensory profile of both fresh and dried B. cucullata was developed and validated.
Visual quality was strongly affected by the drying treatments, in particular the color of the HAD samples significantly turned darker, whereas the FD samples exhibited a marked loss of pigmentation.
As expected, most of sensorial parameters as well as the overall organoleptic profile are deeply affected by the drying method. However, in the experimental conditions adopted, the higher overall organoleptic quality in dried flowers was observed when the drying temperature was set up at 60 or 70 • C.
To conclude, the consumption of dried edible flowers leads to developing new distribution channels, and the description of sensorial features, along with the preservation of phytochemical compounds, are required to define their quality. Biochemical results suggest that medium-high temperatures could be used to obtain dried B. cucullata flowers rich in molecules with healthy value. According to our results, B. cucullata flowers showed a good conservation of sensorial and phytochemical features at drying temperatures of 60 and 70 • C. | 2022-05-21T15:20:58.576Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "9ea625592459fd16976d0d68b57ef1bee179f949",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/11/10/1481/pdf?version=1652968788",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4042d6f5e0d0d88d35780cb4f1b2df18b98352d7",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
11457582 | pes2o/s2orc | v3-fos-license | Opto-thermal analysis of a lightweighted mirror for solar telescope
In this paper, an opto-thermal analysis of a moderately heated lightweighted solar telescope mirror is carried out using 3D finite element analysis (FEA). A physically realistic heat transfer model is developed to account for the radiative heating and energy exchange of the mirror with surroundings. The numerical simulations show the non-uniform temperature distribution and associated thermo-elastic distortions of the mirror blank clearly mimicking the underlying discrete geometry of the lightweighted substrate. The computed mechanical deformation data is analyzed with surface polynomials and the optical quality of the mirror is evaluated with the help of a ray-tracing software. The thermal print-through distortions are further shown to contribute to optical figure changes and mid-spatial frequency errors of the mirror surface. A comparative study presented for three commonly used substrate materials, namely, Zerodur, Pyrex and Silicon Carbide (SiC) is relevant to vast area of large optics requirements in ground and space applications.
Introduction
Large telescopes provide higher spatial resolution and improved sensitivity to probe exotic and unknown physical processes taking place in the Universe. Since last four hundred years, the telescope aperture size has gradually increased, doubling almost in every 50 years period [1]. For night time astronomy, this impressive growth will continue in the new millennia as the work on few notable 20-40 m class giant telescope projects is already in progress [2]. The evolution of solar telescope, on the other hand, has followed a different trajectory [3]. The immense difficulty in managing the thermal loads and internal seeing caused by radiative flux has been the major hurdles in building large solar telescopes. The technology has now advanced to a stage where building large aperture solar telescopes is well within the reach. Efforts from various groups are now underway to build next generation 2-4 m class solar telescopes. Some notable solar facilities that are currently in different phases of development include: a 4m Advanced Technology Solar Telescope (ATST) to be build by the National Solar Observatory, USA [4]. The 4m European Solar Telescope (EST), under considerations from European Association for Solar Telescopes [5]. A 2m class National Large Solar Telescope (NLST) proposed by the Indian Institute of Astrophysics, India [6]. These telescopes will be equipped with adaptive optics and other state-of-the-art scientific instruments to facilitate diffraction-limited observations of the small scale features (∼50 km) of the solar atmosphere. Meanwhile, the 1.6 m New Solar Telescope (NST) at Big Bear Solar Observatory, USA and the 1.5 m Gregor telescope built by a consortia led Kiepenheuer-Institute, Germany, have started providing high quality ground based observations of the sun [7,8].
Evidently, one of the most challenging tasks is to carefully handle the excessive heat generated by large radiation flux incident on the primary mirror. Equally important is the choice of high quality, thermally stable and rigid material capable of maintaining the optical figure over a specified range of observing conditions. The thermal response of the solar mirror is influenced by radiative heating of the mirror caused by direct light absorption and the temperature imbalance with the surroundings. Mirror heating adversely affects the final image quality of the telescope in two ways, as is explained below.
First and foremost, it is hard for a massive mirror having large heat capacity and high thermal resistance to attain a temperature balance with the surrounding air. Under such conditions, the temperature difference between the mirror surface and the ambient air leads to refractive index fluctuations along the beam path. This 'mirror seeing' produces wavefront aberrations responsible for the image blurring. To preserve the image quality of the telescope, the effect of mirror seeing has to be minimized. This means the temperature difference between the mirror surface and the ambient air should not exceed beyond ±2 • C [9]. More crucially, the temperature uniformity across the mirror surface has to be maintained within ±0.5 • C. Several approaches which include, air conditioning [10], fanning the optical surface [11], ventilating mirror interior, resistive heating [12,13] have been proposed to regulate the mirror temperature for ensuring high quality astronomical observations. Laminar air flow across the mirror homogenizes the temperature by disintegrating the 'thermal plumes' closer to the surface. These methods also speed up the thermalisation process by improving the convective heat transfer rate between the substrate and the surrounding air.
Secondly, a poorly conducting mirror is differentially heated to create temperature inhomogeneities within the substrate volume. Thermally induced material expansion or contraction can significantly alters the optical quality of the telescope mirror. This problem can be eliminated by choosing mirror substrates made of ultra low expansion (ULE) material. In fact, the demand for high quality astronomical optics has led to the development of many attractive low expansion materials such as Zerodur (Schott), fused quartz, Cervit, ULE (Corning), Clearceram (Ohara) and Astro-Sitall etc. These ULE materials usually have high heat capacity and low thermal conductivity, implying they can retain a significant amount of heat, leaving the mirror prone to undesirable seeing effects and surface buckling under high thermal loads.
The severity of mirror heating problem scales adversely with the aperture size. To reduce the overall weight and large thermal inertia, the most preferred choice is to use lightweighted mirror structures that are easy to mount and do not deform significantly under gravity. Besides, the reduced mass also facilitates efficient and faster cooling of the mirror in harsh thermal conditions. For space applications, reducing the weight of the mirror has greater premium on the overall cost reduction of the payload.
The lightweight geometry is created by removing pockets of material from the blank without significantly compromising the rigidity and stiffness of the mirror [14]. The extreme lightweighting is usually achieved by mechanical milling and acid etching. The cool air, circulated through the rear pockets can remove the excessive heat from the mirror very efficiently. The pocketed cell geometry introduces structural inhomogeneities within the mirror blank. The top faceplate (reflecting surface) of the mirror gets the mechanical support from the grid of discrete ribs. The effect of underlying geometry of the cellular structure usually manifests in the form of rib print-through patterns a) during the mechanical polishing of the faceplate and b) periodic temperature residuals on the mirror faceplate operating in extreme thermal conditions. While polishing, the lapping tool experiences greater reaction force from the rib locations to wear out more materials thus creating periodic thickness variations on the faceplate. The wavefront errors related to polishing print-through are well studied and there are ways to mitigate them [15][16][17].
In this paper we have examined the impact of relatively less explored but nonetheless important 'thermal print-through' issue associated with the lightweighted structure operating in thermally dynamic environment. The thermo-optic analysis, as it is increasingly recognized, is a vital component of the overall system design and necessary to predict and verify the satisfactory performance of the mirror [18,19]. Here, we have used FEA to solve the time dependent 3-D heat transfer equation to predict the mirror deformation arising from the thermally induced stress and temperature inhomogeneities of a lightweighted structure. For realistic numerical simulations, the location dependent solar flux and ambient heating model is incorporated into FEA analysis. Thermal response of three types of mirror materials is studied at varying ambient temperature conditions that may exist at various observatory locations. Temperature induced surface deformations were analyzed using Zernike polynomials. For optical analysis, the Zernike data was imported into Zemax -a ray tracing software. A comparative study is carried out for three well accepted substrate materials in astronomical optics, namely, Zerodur, Pyrex and SiC.
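As a rough illustration of the kind of Zernike decomposition mentioned here, the sketch below fits a few low-order terms to a sampled deformation map by least squares; it is only a schematic stand-in for the actual analysis pipeline, and the synthetic defocus-plus-noise data are invented.

```python
import numpy as np

def fit_low_order_zernikes(x, y, w):
    """Least-squares fit of a few low-order Zernike terms (piston, tilts, defocus,
    astigmatism) to a deformation map w sampled at unit-disc coordinates (x, y)."""
    r2 = x**2 + y**2
    basis = np.column_stack([
        np.ones_like(x),     # piston
        x,                   # tilt
        y,                   # tilt
        2.0 * r2 - 1.0,      # defocus
        x**2 - y**2,         # astigmatism 0/90 deg
        2.0 * x * y,         # astigmatism +/-45 deg
    ])
    coeffs, *_ = np.linalg.lstsq(basis, w, rcond=None)
    return coeffs, w - basis @ coeffs        # coefficients and mid/high-order residual

# Synthetic example: pure defocus plus a little noise
rng = np.random.default_rng(1)
x, y = rng.uniform(-1, 1, (2, 2000))
keep = x**2 + y**2 <= 1.0
x, y = x[keep], y[keep]
w = 50e-9 * (2 * (x**2 + y**2) - 1) + rng.normal(0, 2e-9, x.size)
c, resid = fit_low_order_zernikes(x, y, w)
print(c[3], resid.std())                     # recovered defocus coefficient ~50 nm
```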
Mathematical formulation
In order to study the primary mirror heating and the associated thermal stress/strain fields affecting the surface figure, each physical process should be identified. In the following we give a brief mathematical formulation of the heat transfer problem along with appropriate prescription for boundary and initial conditions.
Heat transfer
In a simplified form, the telescope mirror can be considered as a 3D circular disk whose front face is exposed to intense solar irradiance I (W/m 2 ). The thin metallic coating on the faceplate absorbs about 10%-15% of the radiant flux, which is converted into heat. The temperature distribution inside the mirror changes as the heat conducts from the front surface to the rear side. The mirror also exchanges heat (via convection and radiation) with ambient air, which is at a temperature T a . A heat transfer equation can be derived based on the principle of conservation of energy, the solution of which gives the temperature distribution T (x, y, z, t). The 2nd order partial differential equation relevant to the current studies for conductive heat transfer in a solid can be expressed as [20]

∂T/∂t = K ∇²T + q (1)

where T is the temperature, K = k/(ρC p ) is the thermal diffusivity of the system, C p is the heat capacity, k is the thermal conductivity, ρ is the density and q = Q/(ρC p ), where Q(x, y, z; t) represents the source term for heat. In the present case there is no internal heat source, i.e. Q(x, y, z; t) = 0 everywhere and at all instants of time. The heat flux enters the system through the surfaces, a fact which must be accommodated in the boundary condition. For solving the temperature T (x, y, z, t) we must assign: -the initial condition T (x, y, z; t = 0) = T i (x, y, z) and -the boundary condition that the net incident flux, which is the balance between the incident and emitted fluxes, is transmitted into the substrate and accounts for the temperature gradients on the domain surface Ω. This condition is described by

k (∇T · n) = q 0 − h (T − T a ) − ε s σ R (T⁴ − T a ⁴) (3)

where n is the surface normal vector at any point on Ω. In Eq. (3) the l.h.s. denotes the heat inflow due to the temperature gradient on the surface, q 0 is the incident flux due to solar radiation, and h (T − T a ) describes the convective heat flow to the surroundings, where T a (t) is the ambient temperature and h is the heat transfer coefficient. The term ε s σ R (T⁴ − T a ⁴) describes the radiative exchange with the surroundings, where ε s is the emissivity of the surface and σ R is the Stefan-Boltzmann constant.
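To make the interplay of Eqs. (1) and (3) concrete, the sketch below integrates a one-dimensional explicit finite-difference analogue of the model through the faceplate thickness; the material values are only indicative of a Zerodur-like glass and the absorbed flux is an assumed 150 W/m 2 , so the numbers are not those of the paper's 3D FEA.

```python
import numpy as np

# 1-D explicit finite-difference sketch of Eqs. (1) and (3): conduction through the
# faceplate thickness with absorbed solar flux, convection and radiation at the front.
k, rho, cp = 1.46, 2530.0, 820.0        # W/m/K, kg/m^3, J/kg/K (assumed glass values)
h, eps, sigma = 5.0, 0.9, 5.67e-8       # W/m^2/K, emissivity, Stefan-Boltzmann
q0, Ta = 150.0, 293.0                   # absorbed flux (W/m^2), ambient (K), assumed
L, n = 0.012, 60                        # faceplate thickness (m), grid points
dz = L / (n - 1)
K = k / (rho * cp)                      # thermal diffusivity
dt = 0.4 * dz**2 / K                    # explicit stability limit
T = np.full(n, Ta)

for _ in range(int(3600 / dt)):         # one hour of heating
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + K * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    flux = q0 - h * (T[0] - Ta) - eps * sigma * (T[0]**4 - Ta**4)   # Eq. (3) balance
    Tn[0] = T[0] + dt / (rho * cp) * (flux / dz + k * (T[1] - T[0]) / dz**2)
    Tn[-1] = Tn[-2]                     # insulated back face (simplification)
    T = Tn
print(T[0] - Ta)                        # front-face temperature excess after 1 h
```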
Model prescription for initial temperature, heat flux and ambient temperature
Due to poor thermal conductivity and asymmetric heating, the initial temperature distribution T i (x, y, z) inside the mirror cannot be assumed to be uniform, i.e., independent of (x, y, z). In order to establish a definitive temperature field at t = 0, simulations were carried out by subjecting the mirror to several day-time heating and night-time cooling cycles [21]. The resulting temperature distribution was used as initial T i for all subsequent studies.
Surface boundary conditions for inward heat flux q 0
The boundary prescription for the inward heat flux q 0 in Eq. (3) depends on the solar irradiance I (W/m 2 ) that varies between sunrise and sunset as [21]

q 0 = γ I (4)

I = I 0 (sin ϕ sin δ + cos ϕ cos δ cos H) (5)

where γ is the absorption coefficient (fraction of radiant energy that is absorbed and converted into heat) of the metallic film on the reflecting surface, I 0 (W/m 2 ) is the irradiance amplitude when the sun is at zenith, δ (rad) is the declination angle of the sun, ϕ (rad) is the latitude of the place, H = (π/12)t is the solar hour angle and t is the local time. In Eq. (3), the heat loss or gain by free convection and radiation is driven by the ambient temperature T a . For convenience, we can split the ambient temperature into two parts, i.e., T a (t) = T D (t) + T N (t), where T D (t) and T N (t) describe the day-time ambient heating and night-time cooling, respectively. For cloud-free sky conditions, these two terms can be approximated using a simple model given by Gottsche and Olesen [22]. We use Eq. (6) and Eq. (7) as input to our FEA model to simulate a range of ambient temperature conditions typically existing at observatory locations. One such realization for 10 °C ≤ T a (t) ≤ 20 °C and solar flux I (W/m 2 ) is shown in Fig. 1. The assumed values for various parameters in Eq. (6) and Eq. (7) used to obtain a smoothly varying temperature curve in Fig. 1 are [21]: T 0 = 6.8 °C; T r = 13.4 °C; T d = −3.5 °C; ω = 12 hrs (the half period of oscillation); τ m = 14:00 hrs (the time when the temperature maximum is reached); t s = 18:00 hrs (the time when temperature attenuation begins); κ = 3.5 hrs (the attenuation constant). The ambient temperature T a shown in Fig. 1 continues to rise well past the solar noon. This is due to the thermal time lag in the terrestrial environment that exists between the peak of solar flux and the peak of ambient temperature.
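The following sketch generates the kind of flux and ambient-temperature curves shown in Fig. 1; the hour-angle origin, the site latitude and declination, the zenith irradiance and the exact day/night functional forms (Eqs. (6)-(7) are not reproduced in this text) are all assumptions on our part, with only the listed numerical parameters taken from the text above.

```python
import numpy as np

gamma, I0 = 0.12, 1000.0                          # assumed absorptance and zenith irradiance
phi, delta = np.radians(32.0), np.radians(10.0)   # assumed site latitude, solar declination
T0, Tr, Td = 6.8, 13.4, -3.5                      # deg C, parameter values quoted in the text
omega, tau_m, t_s, kappa = 12.0, 14.0, 18.0, 3.5  # hours

def q_inward(t):
    """Eqs. (4)-(5): absorbed flux, with the hour angle measured from local noon."""
    H = np.pi / 12.0 * (t - 12.0)
    cos_z = np.sin(phi) * np.sin(delta) + np.cos(phi) * np.cos(delta) * np.cos(H)
    return gamma * I0 * np.clip(cos_z, 0.0, None)

def T_ambient(t):
    """Assumed day/night split in the spirit of Gottsche and Olesen [22]:
    cosine heating up to t_s, then exponential decay with time constant kappa."""
    day = T0 + Tr * np.cos(np.pi * (t - tau_m) / omega)
    T_ts = T0 + Tr * np.cos(np.pi * (t_s - tau_m) / omega)
    night = Td + (T_ts - Td) * np.exp(-(t - t_s) / kappa)
    return np.where(t < t_s, day, night)

t = np.linspace(6.0, 24.0, 7)
print(q_inward(t))
print(T_ambient(t))
```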
Structural deformation
The heat conducted to different parts of the mirror builds up thermal strain which results in structural deformation. For an isotropic linear elastic solid, the thermal strain depends on the coefficient of thermal expansion α (K −1 ), the instantaneous temperature T (t) and the stress-free reference temperature T ref as:

ε th = α [T (t) − T ref ] (8)

If u, v, and w are the thermally induced deformation components in the x, y and z directions, then the strain field can be completely specified in terms of u, v, and w and their derivatives. For small displacements, the constitutive equations relating the components of normal and shear strain to the deformation derivatives can be expressed as [23]:

ε xx = ∂u/∂x, ε yy = ∂v/∂y, ε zz = ∂w/∂z; γ xy = ∂u/∂y + ∂v/∂x, γ yz = ∂v/∂z + ∂w/∂y, γ zx = ∂w/∂x + ∂u/∂z (9)

The displacements at the bottom boundary of the mirror were kept fully constrained, i.e., u = v = w = 0, to avoid rigid body motion.
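As a quick order-of-magnitude check on why the substrate choice matters, the lines below evaluate the thermal strain α(T − T ref ) for a 2 °C excursion using typical handbook expansion coefficients; the actual values adopted in Table 1 of the paper may differ.

```python
# Order-of-magnitude thermal strain eps_th = alpha * (T - T_ref) for a 2 deg C excursion,
# using typical handbook CTE values (assumed; not the paper's Table 1 inputs).
alphas = {"Zerodur": 0.02e-6, "Pyrex": 3.25e-6, "SiC": 2.4e-6}   # 1/K
dT = 2.0
for name, alpha in alphas.items():
    print(f"{name}: eps_th = {alpha * dT:.2e}")
```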
The mirror geometry
Among several possibilities of lightweighted geometry, the open-back mirror configuration is highly suitable for solar telescopes [24]. It consists of a thin faceplate fused to an array of ribs formed by scooped-out material from the back. The pocket geometry can be circular, triangular, hexagonal, square or any other combination of these shapes. The resulting structure has side walls/ribs of certain thickness that supports the thin reflecting faceplate of the mirror from behind. Besides the mechanical support, the rib walls also provide the thermal flow path for heat to conduct away from the faceplate. The presence of discrete grid makes the heat removal rate to vary across the faceplate thus creating temperature impressions resembling the cellular geometry of the lightweighted structure. Figure 2(a) shows the schematic of the parabolic mirror (conic constant =-1, focal length 4m) geometry chosen for the present studies. A similar design is also under consideration for the proposed NLST by India. The diameter of the mirror is 2m and the diameter of the inner hole is 0.42m. The honeycomb bottom comprising about 708 hexagonal cells is enclosed by 20 mm thick outer rim. The side length L of the hex-cell (inset image in Fig. 2(b)) is 34.6 mm and the wall/rib thickness t is 8 mm. The diameter D of the inscribed circle is 60 mm while the wall/rib depth varies from 148 mm for the outermost cells to 85 mm for the one at the center. The faceplate thickness is 12 mm. Compared to a solid geometry, the overall mass of the current lightweighted mirror is reduced by ≃ 68%. The main reason for choosing these hex-cell dimensions is because their structural stability has already been verified for the Gregor 1.5 m mirror telescope [25]. The mirror has a 6-fold symmetry about the optical axis as seen in the CAD design shown in Fig. 2(b). The symmetry property is utilized to reduce the model size and computational requirements by selecting only one section of the mirror in the FEA studies.
The FEA results
The governing equations and associated boundary conditions for the heat transfer and the structural deformation problem are outlined in Section 2. An analytical solution for Eqs. (1)-(3) and Eqs. (8)-(9) exists only for simple and regular shapes. In most practical cases, where the object geometry is highly complex, the boundary conditions are complicated, the material properties are temperature dependent and various physical parameters are coupled, the problem has to be solved by numerical methods. We used COMSOL Multiphysics as our FEA tool to solve the time-dependent 3D heat transfer and structural problem. The segregated solver in COMSOL takes advantage of the direct coupling between the heat transfer and structural models. That is, at each time step (∆t = 100 s) of the 24-hour simulation, the heat transfer module first solves for the temperature distribution of the mirror. Each temperature field is then passed on to the structural analysis module to perform the stress-strain analysis. No data transformation was necessary since a common mesh is applicable to both the thermal and the structural model in COMSOL; the model uses the mirror geometry shown in Fig. 2. The reflecting surface of the faceplate was modelled as a 100 nm thick aluminium coating on the faceplate. The FEA model was run on the 1/6th axisymmetric section of the mirror, which was partitioned into 100644 tetrahedral mesh elements. The top surface of the mirror was continuously heated by the time-varying irradiated solar flux plotted in Fig. 1. The thermal and structural response was evaluated for three different ranges of ambient temperature T_a, i.e., 0 °C-10 °C, 10 °C-20 °C and 20 °C-30 °C. The front, back and outer rim of the mirror were subjected to convective cooling, with the heat transfer coefficient h = 5 W/m² representing natural cooling and h = 15 W/m² representing moderately forced cooling. In addition, a surface-to-ambient radiation condition was specified for the faceplate. Also, the symmetry boundary condition, i.e., k(∇T · n) = 0, was imposed on the side edges of the mirror. In the FEA model, the mirror heating was 'on' (Eq. (4)) between sunrise (6:00 hrs) and sunset (18:00 hrs) and 'off' for the rest of the time. The standard values of the material parameters used in the FEA studies are listed in Table 1 [21]. Figure 3(a) shows the temperature distribution within the mirror volume at 14:00 hrs, when the ambient temperature also reaches its peak. Apart from strong thermal gradients along the axial direction, the appearance of a periodic temperature print-through pattern on the faceplate is clearly evident. The temperature excess ∆T = T − T_a between the mirror and the ambient air is plotted in Fig. 3(b). The temperature nonuniformities on the mirror surface are quantified by two key variations: first, the cyclic undulations δ_T (the temperature print-through effect) along the radial line AA′, which are caused by differential heat flow from the faceplate to the underlying hex-cell structure; second, the trailing temperature ξ_T (edge effect), which denotes the overall temperature difference between the mirror surface and the edges. This difference arises from the more efficient cooling at the walls of the inner hole and the outer periphery of the mirror. According to the data shown for Zerodur in Fig. 3(b), ξ_T could be as high as 2 °C at noon. From the temperature distribution alone, we can loosely identify ξ_T as being associated with low-order surface aberrations, while δ_T contributes to mid-spatial errors, as explained in the next section.
The maximal departures ∆T_max, δ_T,max and ξ_T,max for the ambient temperature range T_a: 20 °C-30 °C for the Zerodur, Pyrex and SiC materials are listed in Table 2. From Table 2, we may note that the amplitudes of both ∆T and δ_T depend crucially on the thermal conductivity of the material and on the convective heat transfer coefficient used in the simulations. A more efficient cooling mechanism is necessary, especially for Zerodur and Pyrex, to remove the excess heat and minimize the mirror seeing effects.
Thermal print-through and temperature nonuniformities
The diurnal variation of δ_T computed between two reference points, one located at a rib wall and one at the center of a hex-cell, is shown in Fig. 4. Even though the surface temperature is always above the ambient, the impact of ∆T and δ_T is manifestly more pronounced around noon (data shown for the T_a range 20 °C-30 °C).
Thermally induced surface deformation
A non-uniform temperature field, such as that shown in Fig. 3(a), causes the mirror substrate to deviate from its nominal defined shape. The thermally induced shape change can be highly irregular in lightweighted structures. The surface distortions for each temperature input field were computed by solving the FEM structural model. The reference temperature T_ref for the undeformed mirror geometry is assumed to be 20 °C. The impact of the lightweighted hex-cell geometry is clearly noticeable in the simulated image showing the surface deformation of the Zerodur mirror in Fig. 5(a). The corresponding axial displacements w along the radial line AA′ of the mirror faceplate for the different substrate materials are also plotted in Figs. 5(b)-5(d). The surface displacement w is positive in the Pyrex and SiC substrates and gradually increases in the radially outward direction. However, for Zerodur, w can be either positive or negative because of the zero crossing of the CTE around 25 °C. For a given temperature field, the surface undulations, represented by δ_w, are not uniform over the entire mirror surface but exhibit the same periodicity as the underlying hex-cell structure. The displacement close to the mirror edges is expressed by ξ_w. The gradient across the mirror surface is approximated by the slope w′ = dw/dr. The numerical values computed for δ_w, ξ_w and w′ at the peak mirror temperature, at t = 14:00 hrs, are summarized in Table 3. The FEM studies were also performed for the other temperature ranges of interest, i.e., 0 °C-10 °C and 10 °C-20 °C. The surface deformation data were found to be very similar to those given in Table 3.
Surface sag and Zernike polynomials
The thermally induced deformation data obtained from the FEA in the previous section have to be transformed into a useful form to evaluate the optical performance of the mirror. A convenient way to model the deformation of an optical surface is to convert the displacement data to surface sag values. The finite-element-computed node displacement of an arbitrary point on the mirror surface can be expressed as d = (u² + v² + w²)^(1/2), where u, v and w are the components of the displacement vector d along the x, y and z directions, respectively. For small perturbations, the surface sag change ∆s and the node displacement at a radial point r_o can be related by the formula given in [26] (Eq. (10)). The sag data for the axisymmetric mirror segment were imported into MATLAB and converted to the full aperture using a coordinate transformation of the form x′ = x cos θ − y sin θ, y′ = x sin θ + y cos θ (Eq. (11)), where θ = Nπ/3 and N = 2, 3, 4, 5 and 6. Equation (11) maps the surface sag data from the original mirror segment (unprimed coordinates) to the other (primed coordinates) sectors. The MATLAB function TriScatteredInterp, which makes use of a Delaunay triangulation algorithm, was used for interpolating the irregularly spaced sag data onto a uniformly sampled array of points. A typical example of the surface sag for the full-aperture Pyrex mirror is shown in Fig. 6(a). The sag values vary between 2.8 µm and 6.13 µm, and the root mean square (RMS) sag over the full aperture is 5.3 µm. The RMS wavefront error is twice the RMS sag. The diurnal variation of the RMS sag for Pyrex, SiC and Zerodur, shown in Figs. 6(b) and 6(c), is consistent with the variation of their CTE over the specified temperature range. The optical aberrations of the thermally deformed surface were further analyzed by fitting Zernike annular polynomials to the 2D sag data. The main advantages of Zernike polynomials arise from their orthogonality over the unit circle, rotational invariance, completeness, and the direct relationship of the expansion coefficients with the known aberrations of an optical system [27,28]. Each Zernike term is a product of three components, namely a normalization factor, a radial part and an azimuthal part of the type cos mφ or sin mφ. The surface sag ∆s(ρ, φ; ε) for an annular pupil with obscuration ratio ε, radial variable ρ and azimuth angle φ can be written in terms of Zernike annular polynomials as ∆s(ρ, φ; ε) = Σ_j a_j Z_j(ρ, φ; ε) (12), where j is a single-index polynomial-ordering term and the a_j are the expansion coefficients [29]. The j-th order Zernike polynomial Z_j(ρ, φ; ε) in Eq. (12) is defined as in [28] (Eq. (13)), where n is the radial degree and m is the azimuthal frequency such that m ≤ n and n − |m| is even. The procedure to obtain expressions for the radial component R_n^m(ρ; ε) in Eq. (13) is discussed in Ref. [28].
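A short Python sketch of the sector-to-full-aperture mapping and regridding step described above is given below; it uses scipy's Delaunay-based griddata as a stand-in for MATLAB's TriScatteredInterp, and the node coordinates and sag values are synthetic placeholders rather than the exported FEA data.

import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
theta = rng.uniform(0.0, np.pi / 3.0, 3000)            # one 60-degree sector
rad = np.sqrt(rng.uniform(0.21**2, 1.0, 3000))         # 0.21 ~ inner-hole/outer radius ratio
x, y = rad * np.cos(theta), rad * np.sin(theta)
sag = 1.0e-6 * (1.0 + rad**2)                          # dummy sag field (metres)

pts, vals = [np.column_stack((x, y))], [sag]
for N in range(1, 6):                                  # rotate by N*pi/3 to fill the other sectors
    a = N * np.pi / 3.0
    pts.append(np.column_stack((x * np.cos(a) - y * np.sin(a),
                                x * np.sin(a) + y * np.cos(a))))
    vals.append(sag)                                   # 6-fold symmetry: same sag in every sector
pts, vals = np.vstack(pts), np.concatenate(vals)

# Regrid the scattered sag onto a uniformly sampled array (Delaunay-based, linear).
gx, gy = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
sag_grid = griddata(pts, vals, (gx, gy), method="linear")
rms_sag = np.sqrt(np.nanmean(sag_grid**2))             # RMS sag over the sampled aperture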
Table 4. Low-order radial Zernike aberration terms: piston (j = 1, R_0^0(ρ; ε) = 1), defocus (j = 4), primary spherical (j = 11) and secondary spherical (j = 22).
We used a MATLAB-based open-source toolkit developed at the University of Arizona for the Zernike analysis of the sag data [30]. The computed Zernike coefficients for the three substrate materials are plotted in the bar chart shown in Fig. 7. Based on their relative amplitudes, the various Zernike terms are separated into two parts. The leading contribution comes from the purely radial part (i.e., the m = 0 case in Eq. (13)), plotted in Fig. 7(a). These terms correspond to piston (j = 1), defocus (j = 4), primary spherical (j = 11), secondary spherical (j = 22) and higher-order (j = 37, 56, 79, 106, 137, 172) spherical aberrations, which are responsible for driving the overall figure change of the mirror. Among these, the defocus term (Fig. 7) dominates for Zerodur, while the primary spherical aberration dominates for Pyrex and SiC. The first 4 radial polynomials and their associated meanings are given in Table 4. The amplitude of these terms diminishes progressively with increasing order j. The pure radial and a few low-frequency azimuthal terms are also reported to have a strong influence on the ocular aberrations encountered in vision-related problems [31].
The remaining low-amplitude Zernike terms are plotted in Fig. 7(b). The amplitude of these mixed (product of radial and angular) Zernike terms is about two orders of magnitude smaller than that of the pure radial terms. The predominant low-amplitude terms comprising radial and angular parts are given in Table 5. Ordinarily, the higher-order terms are not identified with any specific optical aberration. The solitary presence of cos 6φ and its harmonics can be associated with the shape of the thermal print-through errors arising from the 6-fold symmetry of the hexagonal pockets. It is to be noted that the amplitude of these terms (see Fig. 7(b)) does not decline appreciably with increasing j. This implies that using a larger number of Zernike terms is unlikely to improve the fitting accuracy of the sag data further. One reason for this behavior is the limited ability of Zernike polynomials to capture high-frequency surface imperfections, as opposed to their effectiveness in capturing low-frequency global changes in the surface shape. That is why, in the Zernike representation, the impact of the high-frequency thermal print-through on the surface sag is likely to be swamped by the low-order figure changes induced by the defocus and spherical terms. In order to effectively visualize the small-scale surface elevations, we computed the residual sag by subtracting the Zernike-reconstructed surface from the original sag data. The mid-spatial frequency features, typical of thermal print-through effects, are clearly seen in the residual map shown in Fig. 8.
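The least-squares fit and residual step can be sketched as below. This is not the University of Arizona toolkit used by the authors: only a few unnormalized circular (not annular) radial terms are used for illustration, and gx, gy and sag_grid are assumed to come from the regridding sketch given earlier.

import numpy as np

rho = np.sqrt(gx**2 + gy**2)
mask = np.isfinite(sag_grid) & (rho <= 1.0) & (rho >= 0.21)
r, s = rho[mask], sag_grid[mask]

basis = np.column_stack([
    np.ones_like(r),                                 # piston              (j = 1)
    2.0 * r**2 - 1.0,                                # defocus             (j = 4)
    6.0 * r**4 - 6.0 * r**2 + 1.0,                   # primary spherical   (j = 11)
    20.0 * r**6 - 30.0 * r**4 + 12.0 * r**2 - 1.0,   # secondary spherical (j = 22)
])
coeffs, *_ = np.linalg.lstsq(basis, s, rcond=None)   # expansion coefficients a_j
residual = s - basis @ coeffs                        # mid-spatial print-through residue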
Image quality
The surface aberration data of the thermally deformed mirror were analyzed using Zernike polynomials, and the optical performance was evaluated in a commercial ray tracing software, Zemax. A simple telescope design was created in Zemax as shown in Fig. 9. The design conforms to the optical prescription of M1 given in Section 3. The aberration data can be incorporated into the optical model of the telescope by assigning a suitable surface type, the Zernike Standard Sag, to M1 in Zemax. This surface type is commonly used for describing mechanical deformations. Selecting the Zernike Standard Sag surface allows the user to define a standard surface (e.g., plane, sphere, conic) plus additional Zernike terms that are imposed on the standard surface to represent surface aberrations [32]. The aberration data for M1, along with the number of Zernike terms, the normalized radius of the aperture and the list of externally computed Zernike coefficients, were imported directly into Zemax using the Extra Data Editor. M2 was assumed to be free from any surface irregularity. In Zemax, there are many ways to assess the optical performance of the system; we used spot diagrams and the modulation transfer function (MTF) to evaluate the performance of the telescope system. The thermal distortions are assumed to be worst when the mirror temperature reaches its maximum at t = 14:00 hrs. For this worst-case scenario, the effect of thermally induced aberrations on the telescope image quality can be readily seen in the spot diagrams shown in Fig. 10. Except for Zerodur, the RMS spot radius (Figs. 10(a)-10(c)) for Pyrex and SiC is significantly larger than the size of the Airy disc, implying that the diffraction-limited performance of an M1 made of high-CTE materials is severely compromised by thermal heating. The contribution of the low-amplitude Zernike terms (Fig. 7(b)) was estimated by setting the radial terms (Fig. 7(a)) to zero in Zemax. Clearly, the low-amplitude terms add significant scatter to the energy distribution, and the scatter pattern reflects the periodicity and shape of the 'surface bumps' caused by the uneven temperature distribution resulting from the underlying hex-cell structure of M1. Another way to examine the optical performance is to compute the contrast variation of the imaging system at different spatial frequencies. The variation of contrast, i.e., the MTF, for Pyrex and SiC is plotted in Figs. 11(a) and 11(b), respectively. Considering the severity of the thermal distortions, a steep fall in contrast at relatively low frequencies is not completely unexpected.
The preceding studies were also repeated for a convective heat transfer coefficient h = 15 W/m². The corresponding RMS spot radii for Zerodur, Pyrex and SiC were found to be 3.3 µm, 1305 µm and 1798 µm, respectively. This is only a moderate improvement, which shows that forced circulation of ambient air alone is not enough to keep the thermal distortions of high-CTE materials (Pyrex and SiC) within tolerable limits. For effective thermal control, the temperature of the circulated air has to be regulated a few degrees below the ambient temperature. A detailed CFD analysis is, however, needed to arrive at the optimal coolant air temperature that would minimize the mirror seeing and the thermal distortions of M1. The numerical studies presented in this paper would complement those efforts. Finally, we invite the scientific community to consider the present scheme of opto-thermal analysis for the optimization of solar telescope designs.
Conclusions
The temperature stabilization of a primary mirror system is one of the most challenging tasks in building a large aperture solar telescope. To achieve the desired optical performance of the telescope, the thermal response of the primary mirror needs to be accurately predicted and suitably controlled under the harsh observing conditions. For a large aperture telescope, lightweighted mirrors offer significant advantages over traditionally used solid blanks. However, the pocketed cell structure created by the lightweighting process leads to geometry-dependent material inhomogeneities and non-uniform temperature patterns. Besides providing physical support to the faceplate, the rib walls of the lightweighted structure also serve as discrete channels for the heat flow, giving rise to periodic thermal print-through errors across the mirror surface. We carried out a detailed opto-thermal analysis of a 2 m class solar telescope mirror. A time-dependent heat transfer and structural model of the lightweighted mirror was solved using FEA methods. The mirror faceplate was heated by radiant energy from the sun. The distinct appearance of thermal print-through patterns markedly depends on the thickness of the faceplate and the changing thermal environment of the surroundings. The thermal displacement FEA data were converted to surface sag values useful for evaluating the optical quality of the telescope. The Zernike analysis of the sag data showed that, while the major contribution to the optical figure error comes from the primary, secondary and higher-order spherical aberration terms, the thermal print-through arising from the underlying hex-cell geometry mainly contributes to low-amplitude, mid-spatial-frequency surface errors. A large scatter in the ray-traced RMS spot size and a steep fall in the MTF signify a severe degradation in the optical image quality for the high-CTE (SiC and Pyrex) materials. Although SiC and Pyrex show a greater propensity for thermal distortion, that alone may not be sufficient ground to completely rule out their use, at least in moderately sized mirrors. It is also true that SiC and Pyrex may not be able to match the performance level of ULE materials, but, in principle, their thermal stability can be better ensured by designing an efficient temperature control system. A higher convective heat transfer coefficient showed only a marginal improvement in the image quality, which reinforces the need for a better cooling mechanism to remove the excess heat from M1. The approach outlined in this paper would be useful for designing an effective thermal control and temperature stabilization system for solar observations. | 2013-02-26T21:07:51.000Z | 2013-02-26T00:00:00.000 | {
"year": 2013,
"sha1": "bbb3708552ae9050532403ba8b34a03161476a7e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.21.007065",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3c76cb11e1826653430f783454b1d8db6a09d904",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science",
"Medicine"
]
} |
33099662 | pes2o/s2orc | v3-fos-license | Study of the Fractal and Multifractal Scaling Intervening in the Description of Fracture Experimental Data Reported by the Classical Work : Nature 308 , 721 – 722 ( 1984 )
Starting from the experimental data on the main parameters of the fracture surfaces of a 300-grade maraging steel reported by the classical work published in Nature 308, 721-722 (1984), this work studied (a) the multifractal scaling of the main parameters of the slit islands of fracture surfaces produced by uniaxial tensile loading and (b) the dependence of the impact energy to fracture and of the fractal dimensional increment on the temperature of the studied steel's heat treatment, for fracture surfaces produced by Charpy impact. The obtained results were analyzed, pointing out the spectral size distribution of the identified slit islands within specific clusters (fractal components of the multifractal scaling) of representative points of the logarithms of the slit island areas and perimeters, respectively.
economics, mathematics, glasses, agents, and cognition [10-13], etc., have some common features centered on their statistical behavior and the corresponding phase transforms [4, 5, 8, 9] (and chemical reactions in particular), as well as on some dynamic aspects [14-16], nonlinear effects [17], and so forth. It results that these complex systems have certain universality properties, which, due to their generality (see, e.g., [8]), can be described only by some specific numbers (the so-called similitude numbers, or criteria [18-20]). How could it be possible to describe dimensional (particularly physical) quantities only by numbers? The answer is obtained from the examination of: (a) the predictions of Anderson [4, 5, 8, 9] relative to the "explosive" autocatalytic (exponential) growth following the spontaneous symmetry breaking inside specific complex systems, from which one finds that a certain dimensional parameter p has to be described by its logarithm, ln p; (b) Dalton's law of "defined proportions", intervening in the theory of chemical reactions (somewhat similar to the phase transforms [3]), expressed in terms of the differential degree of advance dξ, taken with the sign "−" for substances that disappear during the considered chemical reaction and with the sign "+" for the appearing substances; one finds that the degree of advance ξ of the considered reaction can be expressed by means of ln ν_j, where ν_j is the amount (e.g., number of moles) of one of the substances participating in the chemical reaction; (c) the statistical expression of the thermodynamic entropy (describing the dissipative processes), given by the Planck-Boltzmann expression S = −k · ln ℘, where k is the Boltzmann constant and ℘ is the probability density; (d) Claude Shannon's expression [21-23] of the individual information quantity, i = −a · ln ℘ (a being a constant).
The simplest expression (the zero-order approximation) of the relation between a test physical parameter t(u) and the uniqueness one u is, of course, the linear expression ln t = ln t_1 + s · ln u, equivalent to the power law: t(u) = t_1 · u^s. (1.1)
If the uniqueness parameter u corresponds to the size L of the considered complex system, then the power law (1.1) particularizes into the fractal scaling t(L) = t_1 · L^s. When the relation ln t = f(ln L) is more intricate than the linear one, presenting, for example, a certain curvature, then the existing experimental data can be divided into groups of pairs {(t_k1, L_k1); ... ; (t_kn, L_kn)} so that for each group a specific linear relation is valid, ln t_ki = ln t_1k + s_k ln u_ki, equivalent to the fractal scaling t_ki = t_1k · u_ki^(s_k). Because the pre-power coefficient t_1k and the power exponent s_k depend on the group k of chosen data, it results that the set of relations {t_ki = t_1k · u_ki^(s_k) | k = 1, ..., N} corresponds to a multifractal scaling [24, 25]. Some additional detailed studies of the different types of fractal and multifractal scaling were accomplished in the frame of the works [26-28].
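A minimal Python sketch of this piecewise log-log fitting is given below: each group of (t, L) pairs gets its own linear fit in log space, yielding a local pre-power coefficient t_1k and exponent s_k. The data values and the group boundaries used here are synthetic and purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
L_values = np.logspace(0, 3, 60)
t_values = np.where(L_values < 30, 2.0 * L_values**0.8, 0.5 * L_values**1.2)
t_values *= rng.lognormal(sigma=0.02, size=L_values.size)   # small multiplicative scatter

groups = [L_values < 30, L_values >= 30]     # groups chosen by inspecting the curvature
for k, sel in enumerate(groups, start=1):
    slope, intercept = np.polyfit(np.log(L_values[sel]), np.log(t_values[sel]), 1)
    print(f"group {k}: s_k = {slope:.3f}, t_1k = {np.exp(intercept):.3f}")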
Critical Findings Referring to the Work Nature, 308, 721-722(1984)
In 1984, Mandelbrot et al. [29] claimed that the fracture surfaces of metals are fractal (self-similar) over a wide range of sizes, and introduced the experimental methods named "slit island analysis" (SIA) and "fracture profile analysis" (FPA). As with the large majority of papers published by Nature (average impact factor 12.86 in 1985 and 24.82 in 1996), the above-indicated work had a high scientific impact: we identified [30, 31] at least 26 papers and books, published in the following 10 years alone (up to 1993, inclusively [30]), studying the fractal character of fracture surfaces. Despite its large impact, the hypothesis of Mandelbrot et al. [29] was somewhat restricted by the following studies: (a) even the papers of Underwood [32], Pande et al. [33], Lung and Mu [34], and Huang et al. [35] affirmed that the fracture surfaces of metals can be approximately considered to possess a certain fractal character; (b) Underwood and Banerji [32] concluded that the slit island analysis itself was imperfect in nature as a method for measuring the fractal dimension of fractured surfaces; (c) Lung and Mu [34] found that the fractal dimension was largely affected by the measuring ruler employed and postulated the concept of the inherent measuring ruler; (d) Huang et al. [35] pointed out that how to determine the fractal dimension of a fractured surface has always been a problem of "argument"; and (e) Williford [36] tried to explain the obtained results in terms of multifractals, but this explanation seemed not to be satisfactory for some experimental results [37, 38]; and so forth.
The detailed numerical analysis accomplished in the frame of this work pointed out that the main missing elements of work [29] are the following: (a) no justification of the values of the fractal dimensional increment indicated in the caption of Figure 1 [29]; (b) no analysis of the multifractal scaling of the log A = f(log P) dependence corresponding to the slit island areas and perimeters, respectively; (c) the regression line impact energy = f(fractal dimensional increment) from Figure 3 [29] is obviously inexact, and it does not consider the corresponding possible nonlinear dependence; (d) the dependence of the fractal dimensional increment on the heat-treatment temperature of the 300-grade maraging steel Charpy impact specimens studied in Figure 3 [29] was not studied.
Procedure Intended for the Evaluation of the Fractal Dimension of the Slit Islands
In order to evaluate the slope of the regression line log A = f(log P), the numerical values of the decimal logarithms (log A, log P) of the slit island areas and perimeters, respectively, indicated by Figure 1 [29] were first evaluated by means of the scanning procedure [39].
We obtained s ≡ D ≈ 1.6225 (where i_F ≡ D − 1 is the fractal dimensional increment), in considerable disagreement with the values 1.28 and 1.26 indicated in the caption of Figure 1 [29]. Starting from the interpretation provided by the monograph [40, pages 64-65] of the experimental data obtained by means of the slit island method, according to which (a) the cross-section of area A of the fractured material is not fractal, so that this area is proportional to the square of the slit island average radius R, A ∝ R², while (b) the perimeter P of the slit island is really fractal, of dimension D − 1 (where D is the dimension of the fracture surface), so that P ∝ R^(D−1), we have found that A ∝ P^(2/(D−1)) and that the slope of the log A = f(log P) plot is s = 2/(D − 1). From this relation we obtained i_F = D − 1 values in good quantitative agreement with the fractal dimensional increments indicated in the caption of Figure 1 [29], as well as with the results obtained by other similar works (e.g., [41]).
Study of the Multifractal Scaling of the log A = f(log P) Dependence
Taking into account that all 6 extreme (first 3 and last 3) representative points of Figure 1 [29] are located under the regression line, we assumed that a nonlinear (even a parabolic) log A = f(log P) expression could agree better with the experimental data reported by this figure. To check this assumption, we used the procedures of the well-known classical gradient method [42-44] in order to find the parameters of the parabolic correlation log A = c_2 (log P)² + c_1 log P + c_0, (4.1) which ensure the best fit of the experimental data of Figure 1 [29]. The obtained results are synthesized in Table 1.
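A minimal sketch of the parabolic fit (4.1) by ordinary least squares is shown below; the (log P, log A) values are placeholders, not the digitized data of Figure 1 [29].

import numpy as np

logP = np.array([0.8, 1.1, 1.4, 1.7, 2.0, 2.3, 2.6])    # placeholder data
logA = np.array([1.2, 1.8, 2.3, 2.9, 3.4, 3.8, 4.1])

# log A = c2*(log P)^2 + c1*log P + c0
c2, c1, c0 = np.polyfit(logP, logA, 2)
print(f"c2 = {c2:.4f}, c1 = {c1:.4f}, c0 = {c0:.4f}")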
One finds that the explanation given by Williford [36], in terms of multifractals, of the experimental data referring to the fracture surfaces is more realistic than Mandelbrot's initial hypothesis. We have to underline that this (multifractal) explanation is also supported by the results obtained by Carpinteri and Chiaia [24, 25], especially for concrete samples.
The new versions of Figures 1 and 3 of [29], after our numerical conversion (using the method of work [39]) of the experimental data indicated by these figures, followed by the parabolic fit of the (log A, log P) pairs and the least-squares fit of the fractal dimensional increment = f(impact energy) dependence, are presented below as our Figures 1 and 2.
Towards the Fractal Components of the Multifractal Set of Fracture Surfaces Slit Islands of the Maraging Steels Studied by [29]
Figure 2: The new corrected version of Figure 3 of [29], after the numerical conversion (using the method of [39]) of the corresponding experimental data, and the least-squares fit of the fractal dimensional increment = f(impact energy) dependence (axes: fractal dimensional increment vs. impact energy (J)).
Taking into account the practically continuous change of the slope of the log A = f(log P) plot, the definition of the fractal components of the multifractal set of fracture surface slit islands is strongly related to the experimental accuracy of the (log A, log P) parameters. As the accuracy of these parameters is not known, a certain image of these fractal components
can be obtained starting from the identification of clusters of representative points (log A, log P). We defined the (log A, log P) clusters starting from the distances between the nearest representative points in the (log A, log P) space. If the distance between the nearest representative points belonging to two neighboring sets is considerably larger than that between the nearest such points belonging to each set, these neighboring sets correspond to the desired clusters.
Using this procedure, we identified 6 clusters in the (log A, log P) space of Figure 1 [29], defined by the pairs of (log A, log P) coordinates corresponding to the marginal representative points of each cluster.
These clusters of representative points in the (log A, log P) space are gathered around some average log P_i values (i = 1, ..., N). For each cluster of representative points, the local slope s_i = 2c_2 log P_i + c_1 of the multifractal scaling log A = c_2 (log P)² + c_1 log P + c_0 and the local fractal dimensional increment i_Fi = 2/s_i were evaluated; the obtained results are synthesized in Table 2. The synthesis of these cluster features, as well as the fractal dimensions (or increments) corresponding to each cluster, taken as a specific representative of the fractal components of the multifractal set of fracture surface slit islands, is presented in Table 2.
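A sketch of the cluster identification and per-cluster evaluation, continuing from the parabolic-fit sketch above (it reuses the illustrative arrays logP, logA and the fitted coefficients c2, c1), is given below; the gap threshold is an arbitrary illustrative choice, not the criterion used by the authors.

import numpy as np

order = np.argsort(logP)
logP_s, logA_s = logP[order], logA[order]

# Split wherever the gap to the next point in the (log A, log P) plane is much
# larger than the typical nearest-neighbour spacing.
gaps = np.hypot(np.diff(logP_s), np.diff(logA_s))
breaks = np.where(gaps > 3.0 * np.median(gaps))[0]
clusters = np.split(np.arange(logP_s.size), breaks + 1)

for i, idx in enumerate(clusters, start=1):
    logP_i = logP_s[idx].mean()                 # average log P of the cluster
    s_i = 2.0 * c2 * logP_i + c1                # local slope of the parabolic scaling
    print(f"cluster {i}: <log P> = {logP_i:.2f}, s_i = {s_i:.3f}, i_F = {2.0 / s_i:.3f}")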
One finds that the small values of the fractal dimension correspond to slit islands of relatively small dimensions (perimeters of the order of magnitude of µm), corresponding to fracture surfaces that are not too curly and that even involve some surface breaks, which could eventually explain the occasional values slightly less than 2 of the fractal dimension corresponding to some small parts of the fracture surface.
Study of the Dependence of the Fractal Dimensional Increment of the Fracture Surfaces Produced by Impact on the Temperature of the Steels' Heat Treatment
Unlike the fracture surfaces produced by uniaxial tensile loading, whose characteristic parameters were reported for the 300-grade maraging steel in Figure 1 [29], the last part of this work (Figure 3 [29]) reports the main features of the fracture surfaces produced by impact.
The evaluation of the slope s and intercept i of the regression line E_imp(J) = s · t_heat + i, describing the impact energy to fracture in terms of the temperature of the studied steel's heat treatment, led us to the results s ≈ −1.069 J/°C and i ≈ 494.21 J, with a correlation coefficient r ≈ −0.9563 and a mean square relative error of 10.05%.
Similarly, the evaluation of the slope s and of the intercept i of the regression line i_F = s · t_heat + i, describing the fractal dimensional increment of the fracture surface produced by impact in terms of the temperature of the studied steel's heat treatment, leads to the results s ≈ 1.25 × 10⁻³ °C⁻¹ and i ≈ −0.260, with a correlation coefficient r ≈ 0.9243 and a mean square relative error of 10.971%.
One finds that, as expected, (a) the impact energy to fracture decreases approximately linearly, up to 450 °C, with the temperature of the studied steel's heat treatment, and (b) the deformation of the fracture surface from its ideal (planar) shape, measured by its fractal dimensional increment, increases with the temperature of the heat treatment.
It was also possible to obtain the parameters of a regression line E_imp(J) = s · i_F + i (more exact than that performed in the frame of Figure 3 [26-29]) describing the dependence of the impact energy to fracture on the corresponding fracture surface deformation (fractal dimensional increment): s ≈ −781.47 J, i ≈ 258.40 J, correlation coefficient r ≈ −0.9442, and mean square relative error 13.31%; however, we consider these last results less important than the above-indicated E_imp = f(t_heat) and i_F = f(t_heat) dependencies.
Investigations of the Compatibility of the Fractal/Multifractal Descriptions of the Fracture Parameters with the Experimental Data
Taking into account the errors affecting practically all experimental data, the decision about the compatibility or incompatibility of a certain hypothesis (e.g., the fractal character of the fracture surfaces) has to be established using statistical tests [45-47]. Unfortunately, neither [29] nor [30, 32-38] studied statistically the compatibility of the investigated hypothesis with the experimental data, and these works did not even indicate the errors corresponding to the experimental data used.
In order to evaluate the error risk at the rejection of the compatibility of a certain representative point with the studied correlation Y_i = f(X), it is possible to use either a global (for the entire correlation) or a local test, respectively. For example, the error risk can be evaluated by means of expression (7.1) (see [44-48]), where Y_ik and X_k are the impact energy and the fractal dimension corresponding to the representative point (state) k = 1, 2, ..., N; Y_i,tk and X_tk are the impact energy and the fractal dimension corresponding to the tangency point of the confidence ellipse centered in (Y_ik, X_k) with the studied correlation plot Y_i = f(X); while r_k, s_Yik, and s_Xk are the correlation coefficient and the mean square errors corresponding to the individual values Y_ik and X_k. Because these errors are not indicated by the studied work [29], we will try to evaluate them from other studies about the fracture energy.
The studies [31, 48] of the published works concerning the (multi)fractal correlations of some mechanical fracture parameters with the specimen size point out the orders of magnitude of the errors corresponding to the fracture energy. The corresponding relative errors are indicated in Table 3. One finds that, for concrete specimens, the average relative error affecting the fracture energy is approximately 7%.
Assuming that the relative errors affecting the values of the fractal dimension are considerably less than those corresponding to the impact energy (approx. 10%), expression (7.1) leads to error risks somewhat larger than 2% associated with the rejection of the compatibility hypothesis of the fractal/multifractal descriptions with the experimental data. It results that the compatibility hypothesis cannot be rejected, but a surer decision imperatively requires knowledge of the corresponding measurement errors.
Conclusions
The accomplished study of the numerical data involved in [29] points out the following main original findings.
(1) The decision concerning the fractal or multifractal character of the fracture surfaces of metals requires a prior rigorous study by means of numerical analysis procedures.
(2) To this aim, the errors corresponding both to the geometrical parameters (perimeters and areas of the slit islands) and to the specific mechanical parameters (impact energies), respectively, are necessary.
(3) Taking into account the considerable differences between the values of the fractal dimension resulting from Figure 1 [29], indicated in the caption of Figure 1 [29], or given in Figure 3 [29], we consider that the correct calculation of the fractal dimension corresponds to the interpretation of work [40], which considers that only the perimeters of the slit islands present a fractal character (P ∝ R^(D−1)), while the areas of these slit islands present the usual second-degree dependence on their radii (A ∝ R²); we have found that this interpretation [40] also leads to an agreement between the data from Figure 1 [29] and the values of the fractal dimension indicated by that work [29].
Figure 1: The new improved version of Figure 1 of [29], after the numerical conversion (using method [39]) of the corresponding experimental data and the parabolic fit of the (log A, log P) pairs.
Table 1: Main features of the fractal (log A = c_0 + c_1 log P) and multifractal (parabolic: log A = c_0 + c_1 log P + c_2 (log P)²) scalings of the parameters A, P of the slit islands of the fracture surfaces reported by Figure 1 of [29].
Table 2: The "spectral" size distribution of the clusters of representative points in the (log A, log P) plot of Figure 1 [26-29], representing the fractal components of the multifractal scaling of the log A = f(log P) dependence.
Table 3: Relative errors corresponding to the experimental data concerning the fracture energy G_F for different concrete and rock specimens. | 2017-08-17T02:08:48.250Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "f015dce9bf74d12e49c0f4a3ddc2725a06f6feec",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2012/706326.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f015dce9bf74d12e49c0f4a3ddc2725a06f6feec",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
3673575 | pes2o/s2orc | v3-fos-license | Motivating Mothers to Recommend Their 20-Year-Old Daughters Receive Cervical Cancer Screening: A Randomized Study
Background In Japan, the rate of cervical cancer screening is remarkably low, especially among women in their twenties and thirties, when cervical cancer is now increasing dramatically. The aim of this study was to test whether a modified government reminder for 20-year-old women to engage in cervical cancer screening, acting through maternal education and by asking for a maternal recommendation to the daughter to receive the screening, could increase their participation rate. Methods In two Japanese cities, 20-year-old girls who had not received their first cervical cancer screening before October of fiscal year 2014 were randomized into two study arms. One group of 1,274 received only a personalized daughter-directed reminder leaflet for cervical cancer screening. In the second group of 1,274, the daughters and their mothers received a combination package containing the same reminder leaflet as did the first group, plus an additional informational leaflet for the mother, which requested that the mother recommend that her daughter undergo cervical cancer screening. The subsequent post-reminder screening rates of these two study arms were compared. Results The cervical cancer screening rate of 20-year-old women whose mothers received the information leaflet was significantly higher than that for women who received only a leaflet for themselves (11% vs 9%, P = 0.0049). Conclusions An intervention with mothers, by sending them a cervical cancer information leaflet with a request that they recommend that their daughter receive cervical cancer screening, significantly improved their daughters’ screening rate.
INTRODUCTION
Worldwide, cervical cancer is the fourth most common cancer in women and the fourth leading cause of death in women from cancer. 1 In many westernized countries, it has been proven that increased screening reduces cervical cancer occurrence and mortality. 2 However, here in Japan, the cervical cancer screening rate is relatively low compared to other countries of a similar economic level.
In Japan, screening for cervical cancer is recommended to begin at age 20. When a woman approaches this age, her first invitation for screening, along with a free coupon, is sent from her local government between May and July. In addition, every 2 years thereafter, the local government usually gives all women older than age 20 some form of financial aid for cervical screening, so that the cost is either free or only several hundred yen.
In spite of this free or greatly reduced cost, the screening rate for cervical cancer among Japanese women was recently a dismally low 10.2% for 20-25-year-olds, with a somewhat better rate of 24.2% in 26-30-year-olds. 3 For contrast, in some westernized countries the screening rate is as high as 80% for these same age groups. 2 As a result of this low screening rate, but also because of changing lifestyles, the incidence of cervical cancer in Japan has recently begun increasing significantly for women in their twenties and thirties. 4 To reverse this resurgence of cervical cancer incidence and mortality in Japan, it is imperative that we begin improving the cervical cancer screening rate among our youth.
Our previous studies have revealed that a mother's attitudes towards cancer screening in general, cervical cancer, and anticancer vaccinations are closely correlated with her daughters' cervical cancer screening and vaccination rates. 5,6 Our survey interviews have revealed that teen girls, when they approach 20, usually ask their mothers for advice about cervical cancer screening.
To find out whether a modified reminder for 20-year-old women to engage in cervical cancer screening, through maternal education and asking for the mother's recommendation to the daughter to receive the screening, can increase the daughter's cervical cancer screening rate, we conducted a randomized study in two cities in Osaka Prefecture, Japan in 2015.
METHODS
We asked the local governments of two Osaka Prefecture cities, Toyonaka City (population 400,000) and Yao City (270,000), to collaborate with us. These cities usually send their 20-year-old females an invitation and a free coupon for cervical cancer screening between May and July, then in November-December they send reminder letters to any women who had not by then taken advantage of the free screening. As a result of insights obtained from preliminary interviews conducted with several 20-year-old females and their mothers, we revised the standard cervical cancer reminder leaflet for 20-year-olds, and then added a totally new leaflet made just for their mothers.
The 20-year-old females who had not yet received a cervical cancer screening by October and who still lived with their mother were listed in the order of their Japanese syllabary and then randomized into two study groups. In November and December, the informational leaflet meant for the 20-year-old daughter was sent to one group only. In the other group, both the daughter leaflet and the mother leaflet were sent together in one envelope ( Figure 1).
The leaflet for the daughter consisted of an informational section, which discussed cervical cancer and the importance of getting screening for cervical cancer, along with a cartoon strip of a conversation between a daughter and her mother (eFigure 1). The left page of the leaflet for the mother consisted of suggestions for simple words which she could use to talk to her daughter about cervical cancer and cervical cancer screening. On the right page were more detailed explanations of cervical cancer and cervical cancer screening. The leaflet implored the mother to recommend to her daughter that she receive cervical cancer screening ( Figure 2). Finally, we followed these two groups until the end of March 2015, when the resulting rates of cervical cancer screening were collected and compared. For comparison, we used data obtained from a separate study we had conducted in Hirakata City during the previous fiscal year. 7
Statistics
We used Fisher's exact test and the Chi-square test for statistical analysis. The level of statistical significance was set at P = 0.05.
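A minimal sketch of how such a 2×2 comparison could be run in Python is given below; it is not the authors' analysis script, the counts are taken from the Results section, and the resulting p-values need not reproduce the reported P = 0.0049 exactly since the exact test variant and sidedness used by the authors are not specified.

from scipy import stats

# Screened vs not screened: mother+daughter leaflet arm vs daughter-only arm.
table = [[146, 1274 - 146],
         [115, 1274 - 115]]

chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"chi-square p = {p_chi2:.4f}, Fisher exact p = {p_fisher:.4f}")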
Ethical consideration
The Osaka University Hospital Medical Ethics Committee approved this research.
Financial support
A Health and Labor Research Grant (H26-innovative-cancerresearch-general-102) and a Japan Agency for Medical Research and Development Grant (15ck0106103h0102) supported this research.
RESULTS
In December 2014, there were 2,548 females living in the two cities who met our study criteria: being 20 years old, living with their mother, and not having received cervical cancer screening as of October 2014. We sent only the reminder leaflet meant for the daughter to 1,274 of these females. The other 1,274 were sent both a revised reminder leaflet for the daughter and an informational leaflet for the mother (Figure 1). In the group for which only a leaflet for the daughter was sent, the proportion of women who received cervical cancer screening by March of 2015 was 9% (115/1,274), whereas it was significantly higher (11% [146/1,274]; P = 0.0049, Table 1) in the group for which the intervention actively involved the mothers.
The total number of eligible 20-year-old females in the two cities was 3,112; 2,548 lived with their mother (and were part of this study). Of the total 3,112 eligible, 386 received cervical cancer screening sometime during fiscal year 2015. Thus, the overall rate of cervical cancer screening was 12% among the total eligible 20-year-olds (Table 2).
DISCUSSION
After receiving their first invitation and free coupon for screening between May and July, the initial rate (before October) of cervical cancer screening among 20-year-old females in our two study cities was abysmally low. Thus, any increase in cervical cancer screening for the hold-outs as a result of a more effective November reminder letter would be of great significance. There are several reports that educational leaflets about health promotion in the form of a comic strip are effective with youth. 8,9 However, this is the first report of a randomized controlled study showing that positively influencing a mother's recommendation for cervical cancer screening to her 20-year-old daughter with a comic strip is also an effective means of increasing the rate of screening, moving the dial from 9% to 11% in the study group also receiving the mother's leaflet. Since the implementation of free coupons for increasing the rate of cervical cancer screening in Japan, this modified reminder leaflet approach has so far proven to be one of the most effective interventions yet undertaken to further improve screening rates. In comparison, we conducted a parallel study in the Osaka Prefecture city of Hirakata. 7 During May of fiscal year 2013, only the standard invitation plus free coupon for cervical cancer screening was mailed out by the Hirakata government to all eligible 20-year-old women. As follow-up that year, the standard reminder letters were mailed during January. There was a very small number of women who had already received screening of their own accord in April of 2013, before they received their free coupon in May. Even when we included them, the screening rate over the 8 months prior to the reminder, from April to December, was only 6.4%.
Starting in January of 2014, the standard governmental reminder leaflet was sent to all the remaining eligible 20-year-old women in Hirakata who had not yet been screened by that date. By March of 2014, after the standard reminder, the screening rate for the final 4 months among those who had not been screened by December of 2013 was only 3.6% (the number screened from December 2013 through March of 2014, divided by the residual number who had not been screened prior to the January reminder). We compiled data from these two periods and compared them with results from our study done in fiscal year 2014, using our two revised leaflets.
In May of 2014, the standard invitation and free coupon from the Hirakata government were again sent to all eligible 20-year-old females. However, in contrast to the previous year, in November and December of 2014, our revised daughters' and mothers' reminder leaflets were sent out to all women who had not yet received cervical cancer screening. Everyone got the same combination packet because the government of Hirakata would not allow us to conduct a randomized study where half of the eligible group got different information from the other half. 7 In May of 2015, for the initial call for cervical cancer screening, the Hirakata government sent our revised daughter leaflet, along with the free coupon, to all their eligible 20-year-old women. The rate of cervical cancer screening that occurred before a reminder was sent out in January was significantly higher than during the same time period (from invitation to reminder) of the previous year (data not submitted).
We note that the resulting 2% difference between our two study groups, 9% versus 11%, after adding a letter to the mother, was not as dramatic. Regrettably, we could not include a study arm using only the older standard reminder letter for contrast to the revised reminder.
The correlation between a daughter's cancer screening and her mother's knowledge of health-related information is poorly reported in Japan because a person over 20 is regarded as an independent adult. However, by interviewing younger girls, we found that most teenage girls in Japan think that, when they become 20, they will ask for their mother's opinion about cervical cancer screening. This bodes well, along with the findings we previously reported that the level of a mother's own cancer health knowledge and cancer screening consultation behavior correlates well with whether or not they had, or would, recommend to their daughters that they should receive cervical cancer screening. 4
One limitation of this study is that certain participant background factors, such as the socio-economic status among the randomized groups, could not be taken into account because Toyonaka City and Yao City do not permit revelation of this type of personal information.
The proportion of females who had prior sexual experience was also unknown, and such data would have had a large degree of uncertainty anyway because it would require self-reporting, which is suspect. In Japan, roughly 60% of 20-year-old women are reported to have had sexual intercourse experience. 11,12 Cervical cancer screening is applicable for women who have not had penetrating sexual intercourse experience because HPV is also transmitted by oral sexual encounters, touching a partner's genitals, or being touched on the genitals by a partner in ways that many respondents might not consider or report as 'intercourse'. 13,14 However, there are few cases of cervical cancers that are not related to HPV. Therefore, because there is no means to know for sure who has and has not been exposed to HPV, and because the risks from cervical cancer are so high, local governments proactively send a free coupon for cervical cancer screening to all 20-year-old females.
A second limitation of this study is that our approach will be most effective for daughters who still live with their mother; it remains to be seen how effective this will be for daughters and mothers who live separately. In addition, it is not obvious yet how the mothers are communicating their pro-screening recommendations to their daughters. Recently, a follow-up questionnaire about how they used the leaflets was sent to the mothers and daughters. The results of the questionnaire are now under analysis.
Although our combined daughter and maternal education approach has been somewhat effective, we still have to greatly improve the cervical cancer screening rate in Japan (to as high as 90%) if we are ever going to significantly reduce our alarming and ever increasing incidence of cervical cancer. One approach would be to institute an enhanced health education program in junior high school about cervical cancer and cancer screening. Another approach would be to establish an improved registry and invitation=reminder system throughout Japan, even though such efforts may have been conducted by some local governments on a limited basis already. Egawa-Takata T, et al.
In conclusion, we have found that an interventional education reminder leaflet for cervical cancer screening sent to the mothers of age-eligible girls, combined with a request that the mothers recommend to their daughters that they receive a free cervical cancer screening, significantly improved their daughters' cervical cancer screening rate. However, to improve cervical cancer screening to the point that it will significantly impact the incidence of cervical cancer in Japan will require major efforts in public education, beginning in junior high school. Other methods will need to be investigated as well. | 2018-04-03T01:46:39.415Z | 2017-11-11T00:00:00.000 | {
"year": 2017,
"sha1": "9fa69b676f0dbb6a59d39e321e7554ebdc8a8487",
"oa_license": "CCBY",
"oa_url": "https://www.jstage.jst.go.jp/article/jea/28/3/28_JE20160155/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e19417cf69fceca4f83c93edf0bb4ffae93fbcc5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245230983 | pes2o/s2orc | v3-fos-license | Enteric Coated Oral Delivery of Hydroxyapatite Nanoparticle for Modified Release Vitamin D3 Formulation
Vitamin D3 (VD) and calcium phosphate play a vital role in bone homeostasis. Factors such as obesity or gastrointestinal problems can render the use of pure VD and calcium phosphate supplements ineffective. This study investigated the possibility of using VD-loaded hydroxyapatite nanoparticles for the codelivery of VD and Ca3(PO4)2. Due to the high affinity of Ca3(PO4)2 for bone tissue, HA is an ideal delivery system to deliver VD to target tissue. Herein, HA nanoparticles were synthesized and loaded with VD using a vacuum evaporation method. The synthesized HA-VD nanoparticles were morphologically and chemically characterized by SEM, FTIR, and TGA. The system exhibited a two-stage release pattern, which includes a first-day burst release (35%) and sustained release for a further ten days. The cytocompatibility and cell penetrative ability of the nanoparticle system were assessed in vitro using preosteoblast cells: the system is nontoxic and well-tolerated. Finally, the VD-loaded HA nanoparticles were coated with a gastroresistant polymer, hypromellose phthalate-55 (HP-55), in order to protect the pH-sensitive HA from degradation at lower pHs. A coaxial electrospray technique was employed to achieve this. In all, the tested HA-VD system is a viable alternative for codelivery of VD, Ca, and PO4^3- to their target tissues.
Introduction
Vitamin D is a lipophilic group of vitamins with a steroidal origin. Though there are five types of vitamin D (vitamins D1-D5), the most important are cholecalciferol (vitamin D3/VD) and ergocalciferol (vitamin D2). Vitamin D3, which exists in dietary supplement formulations, is the major type compared with the plant-derived VD2 [1,2]. For the majority of the population, however, the principal source of vitamin D is synthesis following exposure of the skin to UVB radiation, converting 7-dehydrocholesterol to the provitamin cholecalciferol [1,3].
Vitamin D plays a vital role in in vivo homeostasis and metabolism, as well as heavily affecting bone growth. Obese individuals have a higher risk of developing inadequate serum VD concentrations due to its accumulation in the adipose tissue, which serves as a reservoir of VD [4,5]. Thus, a delivery system of pure VD is not a suitable method for the correction of VD deficiency, since most of the administered VD tends to accumulate naturally inside adipocytes. In addition, regular administration of such formulations may cause VD-related toxicity [6,7]. The solution is a controlled-release VD delivery system with a high affinity for target sites. There are several existing targeted delivery systems for VD, such as micelles, dendrimers, liposomes, lipidic nanoparticles, and carbon and silicon nanotubes [7][8][9].
Calcitriol, the metabolite that is the active form of vitamin D, expresses its biological activities by binding to the vitamin D receptor (VDR), which is located chiefly in the nuclei of target cells. VDRs are found in many tissues of the body including the skin, bone, muscle (both skeletal and cardiac), and endocrine tissues, as well as throughout the immune system [10,11]. Calcitriol-bound VDRs act as transcription factors that initiate gene expression to produce transport proteins such as TRPV6 and calbindin, which are involved in calcium absorption in the intestine. Due to its lipophilic nature, VD absorption is blocked when there are insufficient lipids in the intestine. To improve its bioavailability, it is important to enhance its water solubility [12][13][14].
Most cases of VD deficiency are associated with deficiencies of calcium and phosphorus due to a lack of mechanisms to absorb these ions through the gut epithelium. Codelivery of VD with calcium and phosphate can potentially resolve this problem [15,16]. Recently, hydroxyapatite (HA; Ca10(PO4)6(OH)2) has enjoyed heightened attention in the biomedical field due to its exceptional features in biocompatibility, bioactivity, osteoconductivity, and osteoinductivity [8,17,18]. The use of VD-loaded HA nanoparticles resolves most of the problems associated with vitamin D, Ca²⁺, and PO4³⁻ deficiencies. In this study, a laboratory-synthesized composite of hydroxyapatite nanoparticles loaded with vitamin D3 has been evaluated as a possible delivery system of vitamin D with a sustained release profile. The synthesis and morphology of the HA nanoparticles are confirmed with electron micrographs. The loading of VD is confirmed by thermal analysis and FTIR data together with assessment of drug release kinetics. Finally, toxicological and cell penetration profiles of the HA and HA-VD systems were evaluated in vitro using a murine preosteoblast cell line.
Particle Synthesis
(1) Preparation of HA Nanoparticles. First, a 0.50 M calcium sucrate solution was prepared according to our previously published method. The required amount of CaO was added to the sucrose solution followed by stirring for 12 h. Calcium sucrate was then reacted with ammonium dihydrogen orthophosphate to synthesize HA. The Ca:P mole ratio was kept at 1.67 : 1. The mixture was stirred for 24 h, and our previously published method was modified by hydrothermally treating the resulting HA precipitate at 150°C for 12 h. Finally, precipitates were centrifuged at 5000 rpm for 15 min and washed with 50.0 mL of distilled water thrice in order to remove impurities. The products were dried in a vacuum oven at 60°C, 600 mm Hg for 12 h [19,20].
(2) Preparation of VD-Loaded HA Nanoparticles. VD was composited with prepared HA nanoparticles using a standard vacuum evacuation process. HA (2 g) and VD (250 mg, maintaining an 8 : 1 w/w ratio) were dispersed in 10 mL of ethanol for 1 h at 600 rpm followed by ultrasonication for 15 min. The flask containing the resultant HA-VD suspension was evacuated using a vacuum pump for 10 min until a slight fizzing of the suspension was observed, indicating the removal of entrapped air. After the fizzing stopped, the suspension was kept uninterrupted for 10 min to reach equilibrium, and the entire vacuum evacuation cycle was repeated thrice to promote VD's inclusion with HA. Following this, the suspension was centrifuged and rinsed twice using ethanol to remove excess VD. The UV absorbance of the supernatant of the rinsed HA-VD was measured at 275-280 nm, and the encapsulation percentage was calculated based on a standard calibration curve of VD [21].
(3) Preparation of HP-55 Coated HA-VD Nanoparticles. The electrospray system consisted of a high-voltage power supply, a coaxial needle (Linari Nanotech; inner and outer needle diameters of 0.5 mm and 0.6 mm, respectively), two syringe pumps, and a collector. The distance between the positive electrode (needle) and the negative electrode (collector) was 20 cm, and the applied voltage was 12.0 kV (0.02 μA). The core solution of the coaxial system was an aqueous suspension of 10% (w/v) HA-VD NPs, while the shell solution was a 5% (w/v) methanolic solution of HP-55 polymer (prepared at pH > 10). The two solutions were sprayed into a 0.2 M HCl solution at flow rates of 0.05 mL/h (core) and 0.11 mL/h (shell). The resulting particles were collected by centrifugation at 1500 rpm and washed [22,23].
Characterization of Nanoparticles
(1) Morphological and Thermal Analysis. The morphologies of nanoparticles synthesized using the methods set out above were evaluated using field-emission scanning electron microscopy (SEM) (Hitachi SU6600 setup) and transmission electron microscopy (TEM) (Jeol 2100). All SEM samples were subjected to gold sputtering prior to analysis. Energy dispersive X-ray (EDX) spectroscopy studies were carried out to confirm the VD impregnation onto the walls of HA-NPs with a scanning rate of 192 000 CPS for 4.5 min.
The thermal stabilities of the synthesized HA-NP and HA-VD were determined by thermogravimetric analysis (TGA) (STD Q600 setup) over a temperature range of 25 to 1000°C at a ramp of 20°C/min in a nitrogen medium.
(2) Chemical Characterization. Fourier-transform infrared (FT-IR) spectroscopic analysis was performed in order to confirm VD loading onto the HA nanoparticles and the HP-55 coating. All spectra were obtained over the 4000-500 cm-1 region with 32 scans per measurement at a resolution of 4 cm-1 using a Bruker Vertex 80 Fourier-transform infrared spectrophotometer (Bruker, USA). The spectrophotometer was equipped with an L-alanine-doped triglycine sulfate (DLaTGS) detector and a MIRacle single-reflection horizontal attenuated total reflectance (ATR) accessory (PIKE Technologies, USA) operating at room temperature.
Drug Release Studies.
The release profile of VD from the enteric-coated HA-VD composite was assessed in phosphate-buffered saline (PBS) (pH 7.4) containing 0.5% (w/v) sodium azide (to prevent microbial contamination). 200 mg of HA-VD was added to 5 mL of PBS at 30°C under 200 rpm stirring. At predetermined time points, 0.2 mL aliquots of the sample were removed and diluted with 0.8 mL of ethanol (1:5 dilution). The concentration of VD was determined by absorbance measurements at 275-280 nm (Shimadzu UV-3600 UV-Vis-NIR spectrophotometer) with respect to a preprepared standard curve [24,25]. To eliminate the contribution of HP-55 to the UV-vis absorption, 5% (w/v) HP-55 dissolved in pH 10.0 buffer solution was used as the control.
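Because each time point is sampled as a 0.2 mL aliquot diluted 1:5 in ethanol, the measured absorbance has to be converted back to the amount of VD present in the 5 mL release medium. The sketch below is a minimal, hypothetical illustration of that bookkeeping (it ignores the small volume withdrawn at each sampling); the calibration slope and the absorbance value are invented for the example, and in practice the paper's own standard curve would be used.

```python
def vd_mass_in_medium(absorbance, calib_slope, calib_intercept=0.0,
                      dilution_factor=5.0, medium_volume_ml=5.0):
    """Mass (ug) of VD present in the release medium at one sampling point.

    absorbance            : A(275-280 nm) measured on the diluted aliquot
    calib_slope/intercept : standard-curve parameters (absorbance vs. ug/mL)
    dilution_factor       : 0.2 mL aliquot + 0.8 mL ethanol -> 5-fold dilution
    medium_volume_ml      : total volume of PBS in the release vessel
    """
    conc_diluted = (absorbance - calib_intercept) / calib_slope  # ug/mL in the cuvette
    conc_medium = conc_diluted * dilution_factor                 # ug/mL in the vessel
    return conc_medium * medium_volume_ml                        # ug in the whole vessel

# Hypothetical example: slope of 0.0021 AU per ug/mL and a measured A of 0.32
print(vd_mass_in_medium(0.32, 0.0021))   # ~3800 ug, i.e. roughly 3.8 mg
```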
After the desired period, the mixture was sonicated for 2 h at room temperature to establish the loading capacity of VD. The cumulative drug release was assessed using equation (1): Cumulative release (%) = (Wi / Wt) × 100, where Wi is the weight of the VD in the solution and Wt is the total weight of VD in the added nanomaterial.
(1) Loading Capacity of VD. 200 mg of enteric-coated HA-VD material was mixed with 5 mL of pH 9 borate buffer solution in order to remove the coating. After 15 min of continuous stirring, the resultant suspension was centrifuged at 3000 rpm for 10 min, and the pellet was dissolved in 5 mL of 0.1 M HCl solution with continuous stirring for 2 h. The released amount of VD was quantified using the previously described UV-vis spectrophotometric technique with respect to a preprepared standard curve (buffer as control). The loading capacity was calculated using equation (2).
Loading capacity (%) = (Weight of the drug in the nanoparticles / Weight of the nanoparticles) × 100 (2)
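To make equations (1) and (2) concrete, the minimal sketch below evaluates both quantities; the masses used are hypothetical illustration values (chosen to resemble the ~9.4% loading and ~35% day-1 release reported later), not measurements from the paper.

```python
def cumulative_release_percent(w_released_mg, w_total_mg):
    """Equation (1): percentage of the loaded VD found in the release medium."""
    return 100.0 * w_released_mg / w_total_mg

def loading_capacity_percent(drug_mass_mg, nanoparticle_mass_mg):
    """Equation (2): mass of drug carried per mass of drug-loaded nanoparticles."""
    return 100.0 * drug_mass_mg / nanoparticle_mass_mg

# Hypothetical masses for illustration only:
# 200 mg of HA-VD assayed, 18.8 mg of VD recovered after dissolving the pellet,
# and 6.6 mg of VD found in the release medium at a given time point.
print(loading_capacity_percent(18.8, 200.0))    # ~9.4 %
print(cumulative_release_percent(6.6, 18.8))    # ~35 %
```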
Cytocompatibility and Cell Penetration Studies
(1) Nanomaterial Preparation and Cell Seeding. The murine preosteoblast cell line OB6 was cultured in α-MEM medium containing 15% (v/v) FBS and 1% (v/v) penicillin-streptomycin. The culture was maintained in a humidified incubator at 37°C with 5% CO2, and the culture medium was replaced every 3 days. After 80% confluency was reached, the cells were subcultured using 0.25% (v/v) trypsin-EDTA solution. A density of 1 × 10^4 cells was used to seed each well of a 24-well plate containing HA or HA-VD, with a seeding volume of 300 μL per well, and the plates were incubated for 3 h to facilitate initial cell adhesion. Following the 3 h incubation, 700 μL of α-MEM was added to each well.
(2) Live Cell Assay. The live cell assay was performed on the cell-seeded nanomaterials at day 3, according to the manufacturer's protocol. Briefly, 5 μL of 4 mM calcein was added to 10 mL of 1x DPBS to make the live assay solution. After removing the α-MEM growth medium from each well, the wells were washed twice with 1x PBS. Then, 300 μL of the prepared live assay solution was added to each well containing nanoparticles and incubated at room temperature for 15 min. The cells were observed by fluorescence microscopy to assess cell distribution and proliferation on the nanomaterials.
(3) Cell Viability Assay. The water-soluble tetrazolium salt (WST-1) assay was used to measure cell viability after nanomaterial application. After a standard 24 h incubation, all media in the wells were removed, and the wells were washed with PBS. Fresh α-MEM medium containing 10% (v/v) WST-1 reagent was added to each well. The well plates were shaken for 2 min at 300 rpm for homogeneous mixing of WST-1 with the α-MEM medium. The plate was then incubated for 6 h at 37°C and 5% CO2. Following this, 100 μL from each well was transferred to a 96-well plate and absorbance readings were taken using a UV spectrophotometer (SpectraMax190, Molecular Devices) at a wavelength of 440 nm. All experiments were done in triplicate, and mean cell viabilities were calculated against the negative control (cells only). Blank wells that contained nanomaterials without cells were also assessed [26,27].
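The viability percentages reported later follow directly from these absorbance readings. The sketch below shows one plausible way the blank correction and normalization against the cells-only control can be computed; the A440 values are hypothetical and only illustrate the arithmetic, not the paper's actual data.

```python
import statistics

def percent_viability(sample_abs, sample_blank, control_abs, control_blank):
    """Blank-corrected WST-1 viability relative to the cells-only control (= 100%)."""
    mean_sample = statistics.mean(a - sample_blank for a in sample_abs)
    mean_control = statistics.mean(a - control_blank for a in control_abs)
    return 100.0 * mean_sample / mean_control

# Hypothetical A440 readings (triplicates), for illustration only.
ha_wells      = [0.62, 0.60, 0.64]   # cells + bare HA
ha_blank      = 0.08                 # bare HA, no cells
control_wells = [0.75, 0.78, 0.76]   # cells only (negative control)
medium_blank  = 0.05                 # medium only

print(percent_viability(ha_wells, ha_blank, control_wells, medium_blank))  # ~76 %
```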
Results and Discussion
3.1. Morphological Properties. The SEM images of HA nanoparticles are provided in Figures 1(a) and 1(b), while a TEM image is given in Figure 1(c). The synthesized HA particles were observed to be rod-shaped, with an average diameter of approximately 20-60 nm and a length of approximately 100-250 nm. Thus, the successful synthesis of HA nanoparticles was confirmed by SEM and TEM imaging. HP-55 coated, electrosprayed HA-VD nanoparticles are shown in Figures 2(a) and 2(b). According to the surface morphology, the nanoparticles possess a smooth outer surface, and the HA-NPs are well covered with HP-55. The coating layer was detected by element mapping and EDX analysis, which is provided in Figures 2(c) and 2(d). Therefore, the required protection from gastric acid can easily be achieved by this coating. Electrospray as a technique has several advantages over conventional coating systems such as spray, dip, and vacuum coating: it produces a uniform, well-covered layer of coating material. Therefore, pharmacokinetic-related parameters such as dissolution and disintegration are less affected.
3.2. Thermal Analysis. Thermogravimetric analysis (Figure 3(c)) was carried out to assess any changes to the thermal stability of the raw materials when composited and to estimate the amount of VD loaded into the HA nanoparticle. The initial weight loss at temperatures below 100°C can be attributed to the elimination of surface-bound water. The mass loss due to the degradation of VD begins at 235°C in both the pure-and HA-bound samples [31]. The weight loss of the HA-VD indicates that up to 9.4% (w/w) of the final composite consists of VD alone. Moreover, the temperature at the maximum mass loss rate of HA-VD had shifted from 460 to 490°C compared to pure VD. The thermal degradation rate had also been reduced in the HA-VD composite in comparison. This signifies the increased thermal stability of the HA-VD composite, due to the presence of HA and its high inherent thermal stability.
Chemical Properties.
The FTIR spectra of pure HA (A), pure VD (B), and HA-VD (C) are given in Figure 3(a). Theoretically, there are four P=O vibrational modes present for the phosphate ions of HA, ν1 (980), ν2 (470), ν3 (1050, 1085, 1090), and ν4 (660, 520). Out of these vibrational modes, ν3 is the most prominent peak; this appeared at 1030 cm-1 in the HA spectrum. Although carbonate is not a constituent of stoichiometric HA, traces may be observed due to difficulties in its removal following synthesis. The carbonate ion has a very strong transmittance, and the corresponding peaks are usually observed even at very low concentrations. The hydroxyl stretch is observed at 3338 cm-1, broadened due to the presence of intermolecular hydrogen bonding. There is an additional weak transmittance band at 3573 cm-1 characteristic of the vibration of O-H (here present as Ca(OH)2). All of these peaks relevant to HA are present in the HA-VD composite [32,33].
The FTIR spectrum of pure VD (cholecalciferol) displays all major characteristic theoretical peaks. The strong band at 990 cm-1 is caused by the bending of C=C alkene groups in the trans configuration, while the medium-strength band at 1639 cm-1 is caused by the stretching of trans C=C bonds. The peak at 1030 cm-1 is due to the stretching of the C-O bond of primary alcohols. It is a strong peak, but in the HA-VD spectrum it overlaps with the P=O vibrational peak of HA. VD also has a number of C-H alkane bonds, and the corresponding peaks for the bending vibration of these bonds appear at 1362 cm-1. The peak at 1432 cm-1 is due to the bending vibration of methyl C-H bonds, caused by four such methyl groups in the VD structure. These two peaks (1362 and 1432 cm-1) have reduced intensity in the composite spectrum due to overlap with HA bands. The characteristic broad peak at 3338 cm-1 is due to the presence of intermolecular H-bonded O-H stretching [33][34][35].
In the HA-VD spectrum, this peak also overlaps with the prominent hydroxyl peak of HA. Most of VD's characteristic IR peaks are observable in the HA-VD spectrum with minor shifts, indicating successful loading of VD onto HA. However, all peak intensities of the VD spectrum have been reduced due to the masking effect of HA, which makes up proportionally more of the composite than VD does.
The FTIR spectrum of the HP-55 coated HA nanoparticles (Figure 3(b)) reveals that the HA nanoparticles are extensively covered by the polymeric material. All major peaks of pure HP-55 are also noticeable in the HP-55 coated HA, with slight shifts in wavenumber: in pure HP-55 the major peaks are at 950, 1052, 1263, 1715, 2933, and 3448 cm-1, while in HP-55 coated HA they appear at 950, 1072, 1286, 1720, 2895, and 3475 cm-1, respectively. These shifts may be due to the presence of HA.
Both the SEM images and the FTIR data confirm the HP-55 coating of the HA-VD nanoparticles. HP-55 is an acid-stable polymer that dissolves only in the basic pH range, whereas HA undergoes acid degradation when the pH is lower than 5. Therefore, bare HA nanoparticles are not suitable for oral administration. However, the successful HP-55 coating indicates that HA is protected from gastric degradation, so the HA nanoparticles are released only within the small intestine, where the pH is basic.
Cytocompatibility and Cell Penetration Studies.
The cytotoxicity study was done using preosteoblast cells, which are precursors to osteoblasts. Osteoblasts are the major type of cells responsible for bone and cartilage regeneration. These cells also have a higher affinity for calcium and phosphorus and can absorb and accumulate these ions within the cytosol. The bare HA nanoparticle and its VD-loaded counterpart exhibit similar cytotoxicity profiles. Figure 4 shows the live cell assay images of nanomaterials at day 3 compared to the negative control. The lowest number of live cells was observed in HA-treated wells, but a significant difference was not observed between HA-VD and the negative control. According to the cell viability data (WST-1) after 24 h incubation, bare HA-treated cells showed 79 ± 4% viability while HA-VD induced 85 ± 5% viability relative to the negative control (100%). The results indicate that both HA and HA-VD systems are nontoxic, as well as cytocompatible. However, bare HA appears slightly more toxic than VD-loaded HA. The cytotoxicity of HA mainly depends on the size, shape, and surface charge of the nanoparticle. Rod-, needle-, and oblong-shaped nanoparticles show higher toxicity profiles than spherical ones [36,37]. In this study, the reduction in toxicity observed on incorporation of VD may be due to the masking of HA's polar surface groups by VD, or to the replacement of some amount of HA by nontoxic VD.
The cell penetration study of HA was performed using the same cell line in order to determine whether HA-NPs can cross the osteoblast cell membrane and accumulate inside the cytosol. The concentration of Ca2+ in the supernatant of untreated samples was below the limit of detection (LOD) of 1 μg/mL, while treated samples showed a significantly higher concentration (22 μg/mL) of Ca2+. The only possible cause of the increased intracellular Ca was the addition of HA-NPs to the medium; the results therefore suggest a high penetrative ability/uptake of HA-NPs through the plasma membrane of osteoblast cells. In general, the intracellular free Ca2+ concentration is significantly lower than the extracellular concentration (according to published data, around 12,000x lower). An increase in intracellular free Ca2+ concentration may cause calcium-dependent toxicities, reducing cell growth and differentiation, although the cytotoxicity study showed that the HA-NPs were apparently nontoxic towards the tested preosteoblast cells. It can therefore be further concluded that the high level of intracellular Ca is mainly due to the HA-NPs acting as a Ca reservoir and maintaining a balance between free Ca2+ ions and insoluble Ca [38][39][40].
UV-vis Characterization of VD and Drug Release Study.
Since VD is a highly lipophilic molecule, water solubility is minimal. Therefore, absolute ethanol was used as the solvent for UV-vis characterization. The UV-vis absorbance pattern of different concentrations of VD is shown in Figure 5(a). A uniform pattern of absorption maxima was identified between 200 and 300 nm, and maximum absorbance (λ max ) was positioned within the 275-280 nm region. A bathochromic shift was observed proportional to the concentration. This may be due to the solvatochromism and interchanging bonds between solute and solvent. Also, there is a high probability of tiny micelle formation by the interaction of VD with solvent molecules [41,42]. Therefore, concentration determinations were done by measuring absorbance between 275 and 280 nm rather than using a single λ max value. The relevant standard curve for VD is shown in Figure 5(b). Linearity of absorbance of VD was observed within the 100 ppm-1000 ppm range. All concentration calculations for the release study were done based on this curve. Figure 6 shows cumulative VD release percentage curves of the HA-VD nanocomposite at 37 ± 0:5°C as a function of time. The trend of the graphs also fits with the typical diffusion of the small molecule. At day 1, 35:4 ± 3:2% (w/w) (3.8 mg) of the loaded drug was released, and by day 10, 86:7 ± 5:6% (w/w) (9.2 mg) of the drug was released. There is a burst release of VD during the first two hours, and approximately, 25% (w/w) of VD is released within the first 30 min. This is further evident from the drug release profile, where the first release phase (up to 2 h) is ascribed to the rapid release of the surface-grafted VD and the second phase is attributed to the delayed release (up to 15 days) of the entrapped drug molecules within the HA nanoparticles. There is up to 15% (w/w) of unreleased VD even after the assessed period of release.
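The two-phase interpretation (fast release of surface-grafted VD plus slow release of entrapped VD) can be captured quantitatively with a simple biexponential model. The paper does not specify such a model, so the sketch below is only an assumed illustration fitted to hypothetical points read off a curve resembling Figure 6; the parameters f_burst and f_slow would estimate the sizes of the fast and slow pools.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative-release points (% of loaded VD) resembling Figure 6.
t_days  = np.array([0.0, 0.02, 0.08, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
release = np.array([0.0, 25.0, 30.0, 35.0, 48.0, 63.0, 74.0, 82.0, 87.0])

def biphasic(t, f_burst, k_burst, f_slow, k_slow):
    """Fast (surface-grafted) fraction plus slower (entrapped) first-order fraction."""
    return f_burst * (1 - np.exp(-k_burst * t)) + f_slow * (1 - np.exp(-k_slow * t))

p0 = [30.0, 50.0, 60.0, 0.2]            # initial guesses: %, 1/day, %, 1/day
popt, _ = curve_fit(biphasic, t_days, release, p0=p0, maxfev=20000)
print(dict(zip(["f_burst", "k_burst", "f_slow", "k_slow"], np.round(popt, 3))))
```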
According to the release study, the amount of total encapsulated VD is 10.86 mg (100%). Since the amount of HA-VD used for release study is 100 mg, the loading percentage of VD is 10.86% (w/w). The loading percentage given by the release study is close to the loading percentage given by TG analysis (9.4% w/w). Low loading efficiency is commonly associated with VD delivery systems such as polymers, dendrimers, emulsions, and nanotubes. The highest loading efficiency observed in literature was encountered with liposomes and micelles, with efficiencies up to 40 and 55%, respectively [6,9].
The characteristic initial burst release can be another major pitfall since a large amount of VD is lost before reaching the target tissue [9]. Compared to other VD delivery methods, this HA-VD system shows a moderate first phase of the release profile equal to 25% of loaded VD released within the first five hours. Thereafter, the sustained release apparently maintains the serum VD level in a steady state. Conclusively, the release pattern represents typical VD requirements, and the transferable VD amount can be titrated by changing the loading percentages-by simply changing the VD:HA ratio.
There is a considerable body of extant literature on the topic of nanoparticle-based VD delivery systems. Among them, biopolymeric nanoparticles [43][44][45][46][47], nanoemulsions [48][49][50], and colloidal systems [51] are prominent. Chitosan [47,48], alginate [44], soy protein [46], polylactic acid [43], and PLGA [52][53][54] are the main types of materials incorporated with VD. VD delivery systems with inorganic nanoparticles appear not to be as popular. Ignjatović and colleagues prepared hydroxyapatite (HA) and PLGA-based nanoparticles for the local delivery of VD to enhance osteogenesis and bone tissue differentiation [52]. However, there is no literature examining an oral delivery formulation of VD with solely HA-based nanoparticles. Hence, this is the first record of evaluating the applicability of HA nanoparticles on their own for oral delivery of VD, also supplying Ca2+ and PO4 3- in one formulation to address a host of health problems in one package.
Conclusions
Nanoparticle-based drug delivery systems are a rapidly growing field, particularly those making use of biodegradable and biocompatible materials; hydroxyapatite is one such candidate. Current vitamin D delivery systems are associated with various problems such as high burst release, accumulation inside adipose tissue, and suboptimal biocompatibility. In this study, HA was shown to be a good alternative for the oral delivery of VD due to its high affinity for bone tissue. Further, HA also acts as a source of Ca3(PO4)2, and the HA-VD composite synthesized herein is thus a type of codelivery system, providing both VD and Ca3(PO4)2. The HA-VD nanomaterials showed an extended release profile over ten days, and the particles were found to be biocompatible. Electrospray was used to coat HA with the enteric polymer HP-55, generating well-covered particles resistant to gastric pH. This study suggests the use of HP-55 coated, VD-loaded HA nanoparticles as a promising alternative for sustained and targeted oral delivery of VD together with Ca2+ and PO4 3-.
Data Availability
The data generated through this study are given in the main text and as supplementary material. Further clarification and information are available upon request from the corresponding author.
Conflicts of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Funding
The research was financially assisted by the university research grant ASP/01/RE/AHS/2021/90.
Supplementary Materials
SEM and TEM images of the synthesized nanoparticles and ICP-MS data are given as supplementary material. | 2021-12-17T16:44:31.818Z | 2021-12-15T00:00:00.000 | {
"year": 2021,
"sha1": "b8f1f4796a5ce64f9364012ccba4be19ad77ae82",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jnm/2021/9972475.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "133a148db7418252d17ad21970de40c353becf99",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": []
} |
239468025 | pes2o/s2orc | v3-fos-license | Simultaneous Determination of Albendazole and Its Three Metabolites in Pig and Poultry Muscle by Ultrahigh-Performance Liquid Chromatography-Fluorescence Detection
A fast, simple and efficient ultrahigh-performance liquid chromatography-fluorescence detection (UPLC-FLD) method for the determination of residues of albendazole (ABZ) and its three metabolites, albendazole sulfone (ABZ-SO2), albendazole sulfoxide (ABZ-SO), and albendazole-2-aminosulfone (ABZ-2NH2-SO2), in pig and poultry muscle (chicken, duck and goose) was established. The samples were extracted with ethyl acetate, and the extracts were further subjected to cleanup by utilizing a series of liquid–liquid extraction (LLE) steps. Then, extracts were purified by OASIS® PRiME hydrophilic-lipophilic balance (HLB) solid-phase extraction (SPE) cartridges (60 mg/3 mL). The target compounds were separated on an ACQUITY UPLC® BEH C18 (2.1 mm × 100 mm, 1.7 μm) chromatographic column, using a mobile phase composed of 31% acetonitrile and 69% aqueous solution (containing 0.2% formic acid and 0.05% triethylamine). The limits of detection (LODs) and limits of quantification (LOQs) of the four target compounds in pig and poultry muscle were 0.2–3.8 µg/kg and 1.0–10.9 µg/kg, respectively. The recoveries were all above 80.37% when the muscle samples were spiked with the four target compounds at the LOQ, 0.5 maximum residue limit (MRL), 1.0 MRL, and 2.0 MRL levels. The intraday relative standard deviations (RSDs) were less than 5.11%, and the interday RSDs were less than 6.29%.
With increasing large-scale breeding, various parasitic diseases are seriously hindering economic output in the animal husbandry industry [1]. Many kinds of parasites infect animals, and parasites can parasitize various organs and tissues. ABZ has a good effect on gastrointestinal nematodes, lung nematodes, worms, and ectoparasites in pigs [2,3]. This drug is especially effective against Ascaridia galli and other worms in chickens [4]. Therefore, in many countries, ABZ is often used to treat these infections. However, some breeders do not use these drugs according to regulations, and abuse them in pursuit of greater economic benefits. These behaviors cause large amounts of these drugs to remain in animal-derived foods, such as muscle, fat, and internal organs, which can endanger the health of consumers [5]. In addition, animal toxicological studies have shown that ABZ and its metabolites may cause malformations and embryonic lethality [6,7]. Therefore, it is necessary to develop efficient and rapid detection methods for residues of ABZ and its three metabolites in animal products, to avoid harming consumer health. Most countries have stipulated the maximum residue limit (MRL) of ABZ in animal products [8,9]. The Ministry of Agriculture and Rural Affairs of the People's Republic of China, the Codex Alimentarius Commission (CAC), and the European Union (EU) have stipulated that the MRL of ABZ in the muscles of all edible animals should not exceed 100 µg/kg, and the limit of the ABZ residue is based on the sum of ABZ and its three metabolites. This study adopted an MRL of 100 µg/kg in animal muscle as the standard, and developed a simultaneous detection method of ABZ and its three metabolites in pig and poultry muscle.
Many quantitative methods, such as high-performance liquid chromatography-ultraviolet detection (HPLC-UV) [10,11], high-performance liquid chromatography-fluorescence detection (HPLC-FLD) [12][13][14][15][16], and liquid chromatography-mass spectrometry (LC-MS) [17][18][19][20][21], have been established for the simultaneous detection of ABZ and its metabolites in food-producing animals. Among these methods, the most widely used has been LC-MS because of its high sensitivity and accuracy. However, LC-MS instruments are very expensive, causing the detection cost to be relatively high, and high-purity reagents and highly trained operators are strictly required. The HPLC-FLD method is time-consuming and has low detection efficiency, and some HPLC-FLD studies could not quantify all metabolites of ABZ. The detection process was very difficult and complicated when gas chromatography-mass spectrometry methods were used to determine ABZ and its metabolites, due to the basic nature and low volatility of these substances. The sensitivity of HPLC-UV was also not high enough [17]. It is very important to develop accurate, stable, and low-cost detection methods. At present, to our knowledge, there is no report on the simultaneous detection of ABZ and its three metabolites in pig or poultry muscle by ultrahigh-performance liquid chromatography-fluorescence detection (UPLC-FLD). Therefore, this study intended to develop a rapid, easy, and reliable UPLC-FLD method using liquid-liquid extraction (LLE) combined with solid-phase extraction (SPE) technology as a sample preparation technique to establish simultaneous detection of ABZ and its three metabolite residues in pig and poultry muscle (chicken, duck, goose).
Equipment
An ACQUITY UPLC System (Waters Corp, Milford, MA, USA) coupled to a fluorescence detector (Waters Corp, Milford, MA, USA) was used. Data acquisitions were performed by Empower 3 software.
Standard Solutions
Stock standard solutions of ABZ, ABZ-SO2, ABZ-SO and ABZ-2NH2-SO2 were prepared by dissolving 2.5 mg of each individual analyte in 25 mL of methanol to obtain a final concentration of 100.0 µg/mL. The stock standard solutions were stable for 3 months when stored at −70 °C in the dark. The solutions required shaking for the analytes to fully dissolve, and sonication was applied to assist ABZ dissolution.
Blank Samples and Sample Preparation
This study was conducted in accordance with the ethics requirements of the official ethical committee of our university. Muscle samples were obtained from chickens, ducks, geese, and a pig, none of which had received any medication. The samples were stored at −34 • C in a refrigerator after they were homogenized with a tissue homogenizer.
Two grams of homogeneous blank muscle were accurately weighed in 50 mL polypropylene centrifuge tubes after thawing. Samples were mixed with 15 mL of ethyl acetate, vortexed for 5 min on a vortex oscillator, ultrasonically extracted for 5 min, and centrifuged for 10 min at 12,000 rpm (4 °C). The supernatant was transferred into a new centrifuge tube after centrifugation. Subsequently, the precipitate was again extracted as before, and the supernatant was combined with that from the first extraction. The supernatants were blown to near dryness with nitrogen. The residue sample was dissolved in 5 mL of mobile phase and mixed with 15 mL of n-hexane saturated with acetonitrile for degreasing. Then, the sample was vortexed for 2 min and centrifuged for 5 min at 6000 rpm (4 °C). Subsequently, the n-hexane layer was discarded, and the supernatant was collected. After OASIS® PRiME HLB SPE cartridges (60 mg/3 mL, Waters Corp, USA) were conditioned with 3 mL of methanol and 3 mL of ultrapure water, the supernatant was purified by OASIS® PRiME HLB SPE cartridges. Then, the samples were eluted sequentially with 3 mL of mobile phase and 3 mL of 20% ammoniated methanol (ammonia water:methanol = 2:8, v/v), and the eluate solution was collected into 10 mL centrifuge tubes. The eluate solution was blown to near dryness with nitrogen. The residue sample was reconstituted with 2.0 mL of mobile phase, and the mixture was vortexed at low speed for 1 min. Finally, the samples were passed through a hydrophilic PTFE syringe filter (13 mm × 0.22 µm) (Thermo Fisher Scientific, USA), and the filtrate was analyzed by UPLC-FLD.
UPLC-FLD Analysis
A UPLC system equipped with a fluorescence detector was employed. Compound separation was executed on an ACQUITY UPLC® BEH C18 (2.1 mm × 100 mm, 1.7 µm) chromatographic column (Waters Corp, USA) connected to a VanGuard™ BEH C18 (2.1 mm × 5 mm, 1.7 µm) guard column (Waters Corp, USA) with an appropriate column temperature of 35 °C. Mobile phase A was acetonitrile, and mobile phase B consisted of 0.2% formic acid aqueous solution containing 0.05% triethylamine. Mobile phases A and B were degassed for 20 min in an ultrasonic cleaner before they were used. Isocratic elution was utilized in the method, and the ratio of mobile phases A and B was 31:69 (v/v). The flow rate was 0.25 mL/min, and the injection volume was 5 µL. The total run time was 6 min. The excitation wavelength and emission wavelength of the four compounds were 286 and 335 nm, respectively.
Method Validation
The procedure of the UPLC-FLD method was validated by referring to the requirements of the EU [22]. The validation criteria involved sensitivity, linearity, recovery and precision.
Sensitivity
The sensitivity of the method was assessed in terms of the LODs and LOQs. When the signal-to-noise ratio (S/N) ≥ 3, the corresponding additive concentration was the LOD of the analytical method. When S/N ≥ 10, the corresponding additive concentration was the LOQ of the analytical method; at the same time, the concentration met the accuracy and precision requirements (recovery ≥ 70%, relative standard deviation (RSD) ≤ 20%). The working standard solutions of ABZ, ABZ-SO2, ABZ-SO and ABZ-2NH2-SO2 were diluted stepwise with each blank muscle matrix extract solution to give solutions of different concentrations. Then, each concentration of the four compounds when S/N ≥ 3 and S/N ≥ 10 was detected 3 times by UPLC-FLD, and the average S/N was obtained. For linearity, matrix-matched working solutions of the four compounds were prepared at a series of concentrations, and these solutions were then analyzed 5 times by the optimized UPLC-FLD method. Calibration curves were prepared using the peak areas as the ordinate (Y) and the concentrations of the working solutions as the abscissa (X).
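The calibration step described above amounts to an ordinary least-squares fit of peak area against concentration, followed by back-calculation of unknowns from the fitted line. The sketch below is a minimal illustration of that arithmetic; the concentrations and peak areas are hypothetical values, not the study's data.

```python
import numpy as np

# Hypothetical matrix-matched calibration data for one analyte: concentration (X)
# versus UPLC-FLD peak area (Y).
conc = np.array([10.0, 25.0, 50.0, 100.0, 200.0])             # ug/kg
area = np.array([1520.0, 3805.0, 7590.0, 15230.0, 30310.0])   # peak area (arbitrary units)

slope, intercept = np.polyfit(conc, area, 1)                  # least-squares line Y = aX + b
r2 = np.corrcoef(conc, area)[0, 1] ** 2
print(f"Y = {slope:.2f} X + {intercept:.2f}, r^2 = {r2:.4f}")

def back_calculate(peak_area):
    """Convert a measured peak area into a concentration using the calibration line."""
    return (peak_area - intercept) / slope

print(back_calculate(11400.0))   # concentration of an unknown sample, ug/kg
```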
Recovery and Precision
The recovery and precision were determined by analyzing blank muscle samples spiked with each compound in six replicates at the LOQ, 0.5 MRL, 1.0 MRL and 2.0 MRL levels. Recoveries were determined by converting the chromatographic peak areas of the extracted analytes into concentrations using the calibration curve of each compound and comparing the found concentrations with the spiked concentrations. Precision, including intraday precision and interday precision, was evaluated by RSD. The intraday RSD was determined by analyzing blank muscle samples spiked with each compound in six replicates at the four concentration levels at different times on the same day, and the interday RSD was determined by analyzing blank muscle samples spiked with each compound in six replicates at the four concentration levels on different days.
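For clarity, the recovery and RSD figures reported later reduce to the simple calculations sketched below; the six replicate concentrations are hypothetical illustration values for a 1.0 MRL (100 µg/kg) spike, not measured results.

```python
import statistics

def recovery_percent(found, spiked):
    """Mean found concentration as a percentage of the spiked (nominal) level."""
    return 100.0 * statistics.mean(found) / spiked

def rsd_percent(values):
    """Relative standard deviation (%) of replicate determinations."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical example: six replicates spiked at the 1.0 MRL level (100 ug/kg).
found = [92.1, 95.4, 90.8, 94.7, 93.2, 91.6]   # back-calculated concentrations, ug/kg

print(recovery_percent(found, 100.0))   # mean recovery, %
print(rsd_percent(found))               # intraday RSD, %
# The interday RSD is computed the same way from replicates run on different days.
```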
Selection of a Chromatographic Column
The chromatographic column plays a key role in successful analysis and is responsible for separation in an LC system. Based on previous reports on the detection of ABZ and its metabolites in animal-derived foods, the most commonly used chromatographic column was the C 18 column [14,15]. In this study, HSS T3, CSH C 18 , BEH C 18 and Shield RP18 columns were tested.
Initially, regardless of flow rate, column temperature and other related parameters, the chromatographic peaks of ABZ-SO and ABZ-2NH 2 -SO 2 could not be separated well when the HSS T3 column or CSH C 18 column was used for analysis, and overlapping peaks were generated. Therefore, separation could not be achieved using the HSS T3 or CSH C 18 column. A series of sharp solvent peaks appeared when the Shield RP18 column was used, which resulted in the loss of the ABZ-SO peak. Finally, the ACQUITY UPLC ® BEH C 18 chromatographic column was selected as the analytical column for this study. The peak shapes of the four target compounds were sharp and symmetric, optimal separation was achieved, and the peaks were not interfered with by peaks of impurities in the sample tissue. The analysis time was 6 min, and detection could be completed quickly under optimized UPLC-FLD conditions and using a BEH C 18 column.
Optimization of the Mobile Phase
The composition of the mobile phase has a substantial influence on the separation and peak shape of analytes. In relevant studies on the simultaneous detection of ABZ and its three metabolites in animal-derived foods, at present, the water (containing a small amount of organic acid)-acetonitrile [17][18][19] mobile phase system and the water (containing a small amount of formic acid)-methanol [21][22][23] mobile phase system are frequently used. In this study, when 69% water (containing 0.2% formic acid) and 31% methanol was used as the mobile phase, the chromatogram showed large fluctuations at 4-6 min in the gradient elution procedure, which caused the ABZ peak to be lost. When 69% water (containing 0.2% formic acid) and 31% acetonitrile was selected as the mobile phase, high responses and sharp peak shapes were obtained for ABZ, ABZ-SO2, ABZ-SO and ABZ-2NH2-SO2. Other concentrations of formic acid were not tested because the effect of 0.2% formic acid was good. Moreover, the separation effects of six different ratios of water (containing 0.2% formic acid)-acetonitrile mobile phase systems (75:25, 73:27, 71:29, 69:31, 67:33, 65:35, v/v) were investigated. When the mobile phase ratio was 69:31, the baseline chromatogram was stable, there were no solvent peaks, and the separation effects of each target were optimal. However, the chromatographic peaks had a small tail, which may be because ABZ and its three metabolites are weakly alkaline substances and are prone to ionization. An appropriate amount of triethylamine added to mobile phase B can inhibit ionization, relieve the phenomenon of chromatographic peak tailing, and yield good chromatographic peaks. The addition of different amounts (0.01%, 0.03%, 0.05%, 0.07% and 0.09%) of triethylamine to relieve the chromatographic peak tailing phenomenon was also examined. The peaks of each target analyte were symmetrical when the concentration of triethylamine was 0.05%. Thus, the study used 0.2% formic acid aqueous solution (containing 0.05% triethylamine)-acetonitrile (69:31, v/v) as the mobile phase for isocratic elution. Compared to the gradient elution procedures used in other methods, the isocratic elution procedure was simple and stable in this method.
Optimization of Column Temperature and Flow Rate
Proper column temperature can promote the separation effect of the target analytes and eliminate the influence of environmental temperature changes. In this study, the chromatographic peak shapes of four target compounds were tested at different chromatographic column temperatures (25 • C, 30 • C, 31 • C, 33 • C, 35 • C, 37 • C and 40 • C). First, different column temperatures (25 • C, 30 • C, 35 • C and 40 • C) were compared. When the column temperature was under 30 • C, the retention times of the target analytes were delayed. When the column temperature was set to 40 • C or higher, the peak shape of the target analytes improved, but separation of the ABZ-SO and ABZ-2NH 2 -SO 2 peaks was affected. Moreover, high temperatures caused irreversible damage to the column. The column temperatures of 31 • C, 33 • C, 35 • C, 37 • C and 39 • C were then compared. Ultimately, the retention times and separation of the target analytes were optimal when the column temperature was 35 • C. Therefore, a column temperature of 35 • C was selected in this study.
The column pressure of a chromatographic column increases with increasing flow rate. Decreasing the flow rate can prolong the analysis time. The effect of flow rate on the retention time of the target compounds was examined. Finally, the optimal flow rate was selected as 0.25 mL/min according to the separation and retention time of the target compounds.
Chromatograms and Determination of Detection Wavelengths
Good chromatograms (Figures 2-5) were obtained by using the optimized UPLC-FLD conditions. As shown in Figures 2-5, the retention times of ABZ, ABZ-SO 2 , ABZ-SO and ABZ-2NH 2 -SO 2 were approximately 5.00, 2.10, 1.45, and 1.20 min, respectively. The peak shapes of the target compounds were well separated and did not overlap. The chromatographic peaks of the target compounds did not exhibit tailing and were not affected by impurity peaks.
Fluorescence detectors are highly selective detectors that are only suitable for detecting compounds containing fluorophores. Detection wavelengths (excitation wavelengths and emission wavelengths) are necessary parameters for fluorescence detection and can directly affect the sensitivity and selectivity of detection. According to some previous LC-FLD studies [12][13][14][15][16], the excitation wavelengths and emission wavelengths of ABZ and its three metabolites are 290 and 330 nm and 290 and 320 nm, respectively. In this study, the detection wavelengths of the four target compounds were scanned by a multimode plate reader, and the scan results showed that the excitation wavelengths and emission wavelengths of ABZ, ABZ-SO 2 , ABZ-SO and ABZ-2NH 2 -SO 2 were 284.9 and 345.3 nm, 278.6 and 327.9 nm, 303.1 and 330.4 nm and 277.1 and 335.8 nm, respectively. According to the sensitivities and responses of each target, the excitation wavelengths and emission wavelengths of the four target compounds were chosen as 286 and 335 nm, respectively, after the different detection wavelengths of the four compounds were compared in this study. Fluorescence detectors are highly selective detectors that are only suitable for detecting compounds containing fluorophores. Detection wavelengths (excitation wavelengths and emission wavelengths) are necessary parameters for fluorescence detection and can directly affect the sensitivity and selectivity of detection. According to some previous LC-
Optimization of Sample Preparation
The choice of extractant for LLE was significant for the recovery of the four analytes from muscle samples and required both the ability to extract drugs and the ability to remove interfering substances in the sample matrix. ABZ and its three metabolites are all moderately polar molecules that are weakly alkaline. In previous studies, the most common solvent used to extract ABZ and its three metabolites in animal-derived foods was ethyl acetate [24,25] or acetonitrile [12,26]. In this research, the extraction effects of ethyl acetate and acetonitrile as extractants were compared. According to the recovery rate results, the extraction effect of acetonitrile was good. However, due to the high polarity of acetonitrile, more endogenous substances were extracted from pig and poultry muscle, which caused the chromatographic peaks of the target substances to be interfered with by the peaks of impurities during UPLC-FLD analysis. Ethyl acetate is low in toxicity and highly volatile, which could save time during concentration and improve detection efficiency. When ethyl acetate was used as the extractant, these target compounds could be effectively extracted from the sample matrix with a high recovery; additionally, the peaks of the target compounds were not interfered with by the peaks of impurities in the sample tissue during UPLC-FLD analysis. Therefore, ethyl acetate was used as the extractant in this study. Moreover, to improve the recovery rate, samples were extracted one, two, or three times, and the recoveries were compared. Two extractions could effectively increase the recovery rate compared with one extraction. Three extractions did not significantly improve the recovery rate and wasted more reagents. Finally, the samples were extracted two times to optimize the recovery rate and environmental protection.
Relevant literature has reported on different types of SPE cartridges and purification procedures to purify extracted samples, and MCX cartridges [18] and C 18 cartridges [20,21] are commonly used. In this study, a comprehensive comparison of the purification effects of Waters Oasis MCX, ProElutAL-A acidic Al 2 O 3 , Cleanert S C 18 and Waters PRIME HLB cartridges was performed. When Waters Oasis MCX, Cleanert S C 18 and acidic Al 2 O 3 cartridges were used for purification, the recoveries of ABZ and ABZ-2NH 2 -SO 2 were significantly lower, and the purification effect was poor. The purification effect of the Waters PRIME HLB cartridges was good. There was no interference with the peaks of target compounds, and the recovery was high when Waters PRIME HLB cartridges were used. In addition, an oil-free vacuum pump and an antifogging glass vacuum tank were equipped during the SPE extraction process, which could avoid cross-contamination and improve the extraction efficiency.
A filter membrane was used to filter impurities and thus protect the chromatographic analysis system. A polyvinylidene fluoride (PVDF) hydrophobic syringe filter membrane, a hydrophilic polytetrafluoroethylene (PTFE) syringe filter membrane, an aqueous phase syringe filter membrane and an organic phase nylon syringe filter membrane were examined. The results showed that when the aqueous phase syringe filter membrane and PVDF syringe filter membrane were used, severe peak tailing occurred. The organic phase nylon syringe filter membrane had a large pore size and could not effectively filter out impurities. In contrast, the hydrophilic PTFE syringe filter membrane exhibited better performance. Not only did this filter provide clean extracts, but the recovery of the four compounds was also higher when a PTFE syringe filter membrane was used. Therefore, the hydrophilic PTFE syringe filter membrane (13 mm × 0.22 µm) was ultimately selected.
Method Comparison
To date, various analytical methods have been used for the simultaneous detection of ABZ and its three metabolites. However, to our knowledge, not all studies included ABZ and all three metabolites in the detection process, and there are no reports using a UPLC-FLD method for the simultaneous detection of ABZ and its three metabolites in animalderived foods. Shaikh et al. [14] established an LC-FLD method for the determination of ABZ and its major metabolites in some fish muscles; the average recoveries were 67-94%, and the detection time was 20 min. Xu et al. [17] developed an LC-MS/MS method for determining mebendazole and its metabolites, ABZ and its metabolites, and levamisole in muscles of aquatic products. The recoveries of ABZ and its metabolites were 80.0-113.7%, with an RSD less than 10.0%, and the detection time was 10 min. Permana et al. [11] developed an HPLC-UV method determining ivermectin, ABZ, ABZ-SO 2 , ABZ-SO and doxycycline in rat plasma and organs. The recoveries of ABZ, ABZ-SO 2 and ABZ-SO were 79.81-97.29%, with an RSD less than 11.14%, and the detection time was 20 min. The UPLC-FLD method in this study could simultaneously detect the residues of ABZ and its three metabolites in pig and poultry muscle using LLE combined with SPE technology to extract target analytes and purify the sample matrix. The recoveries of ABZ and its three metabolites from all samples were 80.37-98.39%, with an RSD less than 6.20%, the LODs and LOQs were 0.2-3.8 µg/kg and 1.0-10.9 µg/kg, respectively, and the detection time was completed within 6 min. Compared with other methods, the most significant advantages of the UPLC-FLD method were a shorter detection time (6 min), which greatly improves work efficiency, and the RSDs were better than those of the LC-FLD, LC-MS/MS and HPLC-UV methods. Moreover, the extraction recovery and sensitivity were comparable to those of most methods. Thus, the study provides a new and advanced technology for the efficient and rapid detection of ABZ and its three metabolites in animal products.
Validation Results of the Method
We did not find a suitable internal standard before the beginning of the research. Therefore, we chose the external standard method for this study.
Sensitivity
The LODs and LOQs of ABZ and its three metabolites were examined in pig and poultry muscle in accordance with the method above. The determination results are listed in Table 1. As shown in Table 1, in pig and poultry muscle, the LOD of ABZ was 2.8-3.6 µg/kg, and the LOQ was 10.0-10.9 µg/kg; the LOD of ABZ-SO 2 was 0.2-0.4 µg/kg, and the LOQ was 1.0-1.5 µg/kg; the LOD of ABZ-SO was 2.4-3.8 µg/kg, and the LOQ was 8.0-9.7 µg/kg; and the LOD of ABZ-2NH 2 -SO 2 was 0.5-0.9 µg/kg, and the LOQ was 1.5-3.0 µg/kg.
Real Sample Analysis
To evaluate the feasibility and applicability of the new method, 40 samples each of chicken, duck, goose and pig muscle were purchased from a local supermarket and analyzed by the developed method. Only two chicken muscle samples were found to contain ABZ-2NH2-SO2 residues (11.3 and 14.6 µg/kg) and one pig muscle sample was found to contain ABZ-2NH2-SO2 residues (11.8 µg/kg); none of the samples exceeded the MRL of 100 µg/kg (EU standard). Therefore, the real sample analysis shows that the novel UPLC-FLD method is reliable for practical application.
"year": 2021,
"sha1": "580d95953d6635e175a96cb946d2d0a046eeea5e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/10/10/2350/pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "f351328a415f49989b19e834dd3df7702e285530",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
158816403 | pes2o/s2orc | v3-fos-license | Education, the New Science, and Improvement in Seventeenth-Century Ireland
Bacon and Hartlib were leaders of an empirically based movement that contributed to the transformation of reform strategies in Ireland. While continuing to rely on changes to land use, their claims to scientific objectivity helped alter the reformers' goal from civilizing Ireland to improving it by relying on the new learning of the Hartlib Circle. The support of Oliver Cromwell provided unprecedented opportunities to apply theories of political economy on the blank slate of Ireland's landscape.
Steven Shapin began his book on the New Learning by saying "there was no such thing as the Scientific Revolution, and this is a book about it." 1 In the same way, there was no humanist-style revolution in education in the seventeenth century, yet this is a paper about its implications. It is clear that the writings of Sir Francis Bacon and others introduced a novel way of gathering and using knowledge, fostering a new natural or mechanical philosophy that emphasized a new style of education aimed at the gentleman working in the service of the state: a disinterested thinker seeking knowledge for the sake of the common good and improvement of society. For Bacon, it was necessary "to try the whole thing anew upon a better plan, and to commence a total reconstruction of sciences, arts, and all human knowledge, raised upon the proper foundations." 2 More importantly, Bacon aspired to create new methods that would produce a reformed knowledge and superior education that would prove worthy to be valued and supported by the state and other institutions of society. In short, new methods along with new men educated outside the university, men trained to take up knowledge as an instrument of civic utility, would make the learning a more effective arm of state power. 3 One point worth remembering, though I will not develop it fully here, is the importance of the general crisis in politics, society and culture following the Reformation; this widespread upheaval played a crucial role in contributing to the changing attitudes towards knowledge and in particular the relations between knowledge and social order. Indeed, a new method was viewed as a remedy for the problems of intellectual disorder, and the knowledge of nature in particular was considered deeply relevant to problems of order, not least because nature was understood to be a divinely authored book. Most importantly, a new method that relied on observation and reason was valued thanks to its claims to be objective and disinterested, making it a more valuable tool. Naturally, the capacity of knowledge to make valuable contributions to the world flows from an understanding that it was not produced to further particular human interests. With this in mind, the remainder of the essay will aim to look at how a few notable examples of the new types of knowledge and education were brought to bear on the strategies to order and to improve Ireland in the seventeenth century.
Bacon's essay Of Plantations and his involvement in the Articles of Plantation for Ulster are well known, but his influence is more apparent in the Cromwellian period and after. 4 Part of Bacon's plan for reform was to secure order through means approved of and implemented by the state. In the Novum Organum he called for novel educational methods along with organized and collective labor, all supported by the government. In the New Atlantis Bacon has Solomon's House, a combination of bureaucratic and intellectual organization, a proto-research and engineering institute clearly intended to serve the interests of an imperializing state: the work done there was presented as powering the expansionist drive of the kingdom and consequently received huge resources from the state. 5 In the event, several such intellectual associations or circles emerged under the early Stuarts, and Cromwell and others were to rely on their expertise when it came to dealing with the damnable question of Ireland. 6 The Irish Rebellion of 1641 destroyed many of the material changes to the landscape introduced by the plantations. But Cromwell's conquest and confiscations provided the opportunity to impose order and civility once again. Surely the first stage in the process, the comprehensive defeat of the natives, was now complete. The improvement of Ireland was now the responsibility of men associated with the new learning, led by Samuel Hartlib. 7 The Hartlib Circle had produced several tracts on husbandry and improvement in the 1630s and 1640s. Their association with the Independents in England meant that Cromwell's victory might permit the implementation of some of their schemes on the blank slate of Ireland. 8 A cursory examination of some of the writings and proposals for the improvement of Ireland produced by these new scientists is intended to support the three suggestions offered here: that the new learning's views on economic improvement were informed by Renaissance and Classical assumptions about husbandry and cultivation; secondly, that they believed a civilized or enlightened society was rooted in private property, permanent dwellings, and enclosed fields; 9 and finally that Irish hostility to, and destruction of, these markers of civilized identity convinced them that Irish pastoral society had to be replaced by a settled, civilized one before economic improvement could take place. 10 The opportunities offered by Cromwell's new dispensation in Ireland were like manna for the advocates of the new learning. The search for precise empirical knowledge of Ireland had begun before the civil war but was given new life by the Hartlib Circle. The shiring, mapping, and surveying of Ireland was pushed forward with renewed vigor, with Elizabethan servants producing a plethora of discoveries, accounts, descriptions, and views of Ireland. 11 The earliest attempt to create accurate maps of lands beyond the pale was undertaken with the support of Sir William Cecil, though the constant disappearance of the cartographers limited the effectiveness of this project. 12 Most importantly, one of the first expositions on the necessity "Of Plantations" was written by the father of the new science, Francis Bacon himself. But one of the distinguishing features of the new Baconian science was its desire to gain control of the natural environment through improvement and technology. This endeavor served both spiritual and utilitarian purposes.
Hartlibians accepted the puritan view of fallen man existing in an unfriendly environment, a mindset which convinced them that the godly man must labor to "correct" or improve nature as a means of atonement. 13 Many of the adventurers who came to Ireland considered the pastoral society of the Irish as the perfect example of ungodly sloth and sin: an indolent people living passively on the earth, taking only what nature offered. The new devotees of reason and improvement believed this passivity blurred the distinction between man and beast. For them, Ireland was an island of savages and barbarians, a nation overdue for a sharp dose of physical and moral improvement. But before this could be undertaken, a reliable description of Ireland based on empirical observations and the latest technical knowledge was required.
Thanks to the efforts and influence of Hartlib, just such an effort was sponsored by the Council of State, resulting in William Petty's Down Survey, the most thorough account of Irish topography ever attempted. Petty drew upon the most advanced scientific and technical methods available in the seventeenth century, relying heavily on members of the Hartlib group. 14 In addition to the instructions supplied by Petty, the surveyors were given standardized tools and training, a survey course by Miles Symner at Trinity, and instructions to ensure consistency and accuracy in their endeavors. 15 The empirical approach, the modern instruments, the recording of data in uniform folios provided by Petty and placing the findings "down" on a map, are all examples of the new scientific skills advocated by the Hartlib Circle and the new education. But if the skills and methods were new and improved, the goals of the modern scientists displayed a considerable continuity with the Renaissance adventurers who had preceded them in Ireland. Petty joined a chorus of previous reformers who condemned the primitive and impermanent architecture of the natives as a symbol of their indolent and unsettled lifestyle. For this reason, building with brick or stone and mortar was urged: sturdy dwellings were an important sign of the adoption of the new learning and the English style of order. 16

After 1555, the outpouring of husbandry manuals and agricultural treatises inspired by the translation of the Foure Bokes of Husbandry helped to establish the relationship between civility, gentility, and husbandry. This further inspired the belief that the socio-economic conditions in Ireland were the root of all evil there. The Hartlibians further believed that agricultural and spiritual reform were linked. The efficient husbandman was a common metaphor for puritans, and in 1652 Dury and Hartlib published The reformed Spiritual Husbandman. The next year saw Ralph Austen's A treatise on fruit trees, together with the spiritual use of an orchard, held forth in divers similitudes between natural and spiritual fruit trees. 17 These ideas derive from Bacon's conviction that a great co-operative effort to marshal empirical knowledge would help restore man's dominion over nature, which had been sacrificed at the Fall. 18 This interventionist attitude toward nature justified the construction of an improved, man-made landscape. So before the Irish could embrace the spiritual reforms on offer, Ireland would have to be remade as an ordered society based on well-bounded agricultural plots and solid masonry. Conveniently enough, the precise technical survey being taken down would also help to identify the best land available.
Prominent among Petty's Instructions for surveying and admeasuring the Lands in Ireland was the command that "distinction be made betweene the profitable and unprofitable parts... [and] whether the profitable be arable, meadow, or pasture." 19 Reforms and improvement were genuinely desired by some, while opportunism, ambition and the search for profit drove others. Yet they all agreed that the mobile pastoral society of the native Irish could not coexist with the tenant-based, individual agricultural units of the arriving settlers. Within years Petty would offer a solution for native idleness through a public works project which would simultaneously give Ireland a more civilized, modern landscape. The "spare hands" of the idle, "(besides the making of Bridges, Harbors, Rivers, High-ways,) are able to plant as many Fruit and Timber Trees and also Quick-set Hedges as being grown up, would distinguish the Bounds of Lands, beautifie the Countrey, shade and shelter Cattel, furnish Wood, Fuel, Timber and fruit, in a better manner than ever was yet known in Ireland or England." 20 Beyond their concern for idle hands, these new scientific ideals remained tied to notions of civility, culture, and cultivation. Significantly, more and more of the improvements were directed almost exclusively at Irish land, the people becoming increasingly invisible as nature was emphasized according to the latest educational ideals. Ireland's transition from barbaric to civilized continued to look like the transformation of an open pastoral landscape into a divided agricultural one. At the heart of all of this would be gardens and parks as evidence of having tamed a wild and menacing environment. Thus, new crops, new methods, selected livestock, land drainage, reclamation, manuring, emparking, and enclosing with hedges, ditches, palings, and walls, and massive deforestations would join with the new towns to proclaim the distinctive (and superior) values of the newcomers. 21

An early example of the novel educational ideas was provided by Samuel Hartlib in 1641. 22 His ostensibly utopian Description of the Famous Kingdome of MACARIA drew upon the models of Bacon as well as Roman ideas about colonies but was in fact a thinly-veiled practical program for the improvement of newly acquired lands. Applying Roman colonial theory to individuals, Hartlib reported that in Macaria, "if any man holdeth more land than he is able to improve to the utmost, he shall be admonished . . . and if he doth not amend his husbandry within a yeares space, there is a penalty set upon him, which is yearly doubled till his lands be forfeited, and he banished out of the Kingdom." 23 The failure to "improve" your land, or in this case, to farm it in the English manner, was here established as a justification for dispossession. MACARIA was offered as an Example to other Nations with councils and tax policies directed squarely towards the scientific improvement of the natural world. Beyond its Great Council there were five "under-councils" for "Husbandry, Fishing, Trade by Land, Trade by Sea, and for new Plantations," just the sort of state-sponsored institutions recommended by Bacon. For Hartlib and his followers, husbandry was the basis of plenty and prosperity, and therefore essential to supporting the number of people necessary for trade and commerce: "except Husbandry be improved, the Industrie of Trading . . . can neither be advanced or profitably upheld."
24 This conviction was common to the new political economists, William Petty in particular. In order to expedite the spread of economic improvements in MACARIA, "the twentieth part of every mans goods that dieth shall be employed about the IMPROVING of lands, and making High-wayes faire, and bridges over rivers; by which means the whole Kingdome is become like a fruitful Garden." 25 The use of confiscation in probate was meant to maximize the number of acres under the plough, thereby laying the foundation for economic improvements. But plantations in Ireland over the last century had made clear that not all Irish acres were created equal. As a result, before Hartlib's Macarian example could be followed, an exact reckoning of Ireland's land and resources was imperative. The obsession here with agriculture and husbandry was not at all new, though thanks to the new educational ideas the empirical and technical methods used to gather information about Ireland certainly were. 26 But despite the advanced techniques of the new science, the economic improvements planned were predicated on the need to eliminate the pastoral society of the natives. Throughout these years, the proper use of land was taken as the key signifier of civility. 27

Hartlib found an excellent source for the information desired in the unfinished work of Gerard Boate. The first volume of his Naturall History of Ireland was published posthumously, with a dedication by Hartlib, who tried for years to gather additional material for later volumes by circulating a list of "Interrogatories," but the responses were never sufficient to complete the work. He still felt the need to publish the first part as "a True and ample Description of the several ways of Manuring and Improving" Ireland. Beyond scientific benefits and the common good of Ireland, the title page freely admitted that the work was intended "for the benefit of the Adventurers and Planters therein." 28 The Naturall History provides clear evidence of the link between education, agriculture, improvement, and cultural differences exposed by the imperial process. Boate wrote in the aftermath of the Irish Rebellion of 1641, and that experience surely informs his entire work. Interestingly enough, the History also demonstrates how readily the Irish identified the ways in which agricultural improvements threatened their society and culture. Indeed, nothing outraged Boate more than the Irish rejection of the civility on offer and their determination to sweep it away in all its manifestations. 29 Boate was convinced that husbandry was the surest path to the perfection of earthly benefits, "and in the management thereof by way of Trading." Hartlib agreed with this in his Dedication, telling Cromwell and Fleetwood that he knew "nothing more useful than to have the knowledge of the Naturall History of each Nation advanced . . . otherwise there can be no Industrie used towards the improvement and Husbandry thereof." 30 But for Hartlib as for Boate the improvements in Husbandry were very much a part of the civilizing and improving process central to imperialism. Indeed, one of the subjects of the later volumes was to have been the "great paines taken by the English, ever since the Conquest, to civilize [the Irish], and to improve the Countrie." 31 For centuries civilizing Ireland had been equated with pacifying Ireland.
But by the mid-seventeenth century civility was firmly associated with cultivation, and cultivated plots were seen as the essential building blocks of an ordered, rational society that justified imperial expansion. 32 The fact that conversion of confiscated pasture land into enclosed fields led to considerable personal gain only proved that the instruments of providence were reliably rewarded. 33 For advocates of the new science, God manifested himself in nature and in human Learning, and it was one of the great failings of the Irish that they allowed so much of their country to lie watery and waste "through their carelessness, wherby all or most of the Bogs at first were caused." 34 This state of affairs was particularly galling to a Dutchman like Boate. He insisted that the excessive moisture which plagued Ireland could be reduced "by the industry of men, if the country being once inhabited throughout by a civill Nation." 35 This may have been a fanciful idea in the 1650s, but the point to notice is the belief in the ability of "civil men" to make Ireland into a productive and wealthy nation. Boate's descriptions of clearing fields, building roads, erecting fences, and draining bogs were intended as proof that the English were "the introducers of all good things in Ireland." More importantly, Boate recognized that the key indicators of economic improvement and civility in Ireland were "building, planting, hedging, and the like, but chiefly with [new] kinds of manuring." 36 By the 1650s the ideas of the new science had provided considerable credibility for an ideology which saw landscapes and land use as a means of distinction, a means of expressing the values of a society. Furthermore, an ordered and settled landscape was a marked sign of the improved conditions in Ireland. In this case the English value being expressed is the valorization of order, particularly an order that serves as an assertion of imperial authority. 37 Similarly, the alteration of a landscape may be viewed as a process that validates and legitimates power, or as the method people use to transform the natural world into cultural realms of meaning. As a result, agricultural improvements and changes in the land become important sites where cultures and values are expressed, but also contested.
Accordingly, the native inhabitants were well aware of the significance of the "improvements" which were appearing throughout the country. Castles, forts, and walled towns were the most glaring examples of the new civilized identity being implanted. But these sturdy defensive structures were by no means the sole expression of the new and improved order being constructed in Ireland. The fields, fences, hedges, ditches, and walls were easily recognized as the borders and barriers they were meant to be. It should come as no surprise that the rebellion of 1641 witnessed widespread assaults on these monuments to civility and improvement. Boate complained bitterly of Irish ingratitude, reporting English improvements "for which that brutish nation from time to time hath rewarded them with unthankfullness, hatred, and envy, and lately with a horrible and bloody conspiracie, tending to their utter destruction." 38 In John Temple's account of the Irish rebellion he acknowledges the native desire to destroy in symbolic fashion everything that reminded them of the English presence in the country, thereby communicating their hostility to the cultural markers being planted in the landscape. The lamentations about these attacks reveal how accurately the Irish identified the signs produced by the new science and education: all the civility and good things by them introduced amongst that wild Nation; and consequently in most places they did not only demolish the houses built by the English, the Gardens and Enclosures made by them, the Orchards and Hedges by them planted, but destroyed whole drove and flocks of ENGLISH Cowes and Sheep. 39 The evidence for similar attacks on the material aspects of agricultural improvements is common in the years of relative tranquility from 1610-40. It seems that both sides were well aware of the role which land use and material culture played in distinguishing their incompatible societies. The tidy inclosed plots proved regular and easy targets, tokens of change that could be demeaned on a daily basis. One early improver complained bitterly about an annoying habit of his neighbors: [...] flourish. The civilized example of husbandry and permanent dwellings was the best way to "transmute" the Irish.
As early as 1641, John Temple mistakenly assumed that "these people of late times were so much civilized by their cohabitation with the English . . . [that] many Irish, especially of the better sort, have taken up the English language, apparel, and decent manner of living in their private houses." 42 Apparently he was soon to be disabused of this notion. The attacks in 1641 on English towns, buildings, boundaries, and all enclosed lands had demonstrated the natives' hostility to the improved landscape.
Cromwell's transplantations to Connaught proved one more failed solution; and by the 1670s it was a commonplace that the pastoral society of the "wild Irish" was the number one obstacle to the improvements desired. Petty's scientific observations convinced him that forcing the Irish to live like the English would simultaneously eliminate their barbarous culture, provide jobs, and create the wealth necessary to sustain industry. Petty felt the government could "create jobs for Idle hands and wealth through building of 168,000 small Stone-wall Houses, with chimneys, Doors, Windowes, Gardens and Orchards, ditch'd and quicksteed; instead of the lamentable Sties now in use… The planting 5 Millions of Fruit-Trees...The planting 3 Millions of Timber-Trees upon the Bounds and Meers of every [plot]." 43 By forcing the Irish to live in the English manner, Petty believed they would inevitably be transmuted into the civilized farmers who would live in stone houses and generate the wealth necessary for advancing Irish trade. In the end, Petty even proposed that single Irish women be forced to marry English men, and raise their families in the English manner, for "when the Language of the Children shall be English, and the whole Oeconomy of the Family English, viz. Diet, Apparel, &c the Transmutation will be very easy and quick." Imposing English husbandry, language, law, architecture, and culture on Ireland was accepted as the necessary precursor to introducing order and civility. For William Petty economic improvement was more reliable than "all Military means of setling and securing Ireland in peace and plenty." 44 Economic improvements-in the form of a settled agricultural society-were the key to the emergence of a civilized and enlightened Ireland. Since civility needed to precede religious conversion, economic improvements offered the Irish a most desirable combination: material gain in tandem with spiritual salvation.
Here again we see the new learning linking economic improvement and a civilized identity to the elimination of the existing Irish customs and society. For individuals, there were tremendous gains to be had from the economic improvements proposed. But for the government the improvements and civility desired were a means to remove the mobile, pastoral society which had resisted English offers of peace and order for five centuries. Advocates of the new education and learning offered economic improvement as the means to create a new, enlightened identity for Ireland. But the emphasis on cultivation and agriculture meant that the Irish people were reduced to a secondary concern. The appropriate social and economic attitudes toward land and settlement would have to come first. With the natural world properly subdued, the population could choose between husbandry and Connaught. | 2019-05-20T13:06:50.630Z | 2018-12-18T00:00:00.000 | {
"year": 2018,
"sha1": "365eacc0e9a32c9901de1db82ae7b767c576c6ab",
"oa_license": "CCBYNCSA",
"oa_url": "https://journals.openedition.org/etudesirlandaises/pdf/5598",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "7e382f87e724dbc38ef5fabb4012e3fa80c7c4ba",
"s2fieldsofstudy": [
"History",
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
246983728 | pes2o/s2orc | v3-fos-license | Endpoint Sobolev bounds for fractional Hardy–Littlewood maximal operators
Let $0<\alpha<d$ and $1\le p<d/\alpha$. We present a proof that for all $f\in W^{1,p}({\mathbb{R}}^d)$ both the centered and the uncentered Hardy–Littlewood fractional maximal operator ${\mathrm{M}}_\alpha f$ are weakly differentiable and $$ \Vert \nabla {\mathrm{M}}_\alpha f\Vert_{p^*} \le C_{d,\alpha,p} \Vert \nabla f\Vert_p , $$ where $$ p^* = (p^{-1}-\alpha/d)^{-1} . $$ In particular it covers the endpoint case $p=1$ for $0<\alpha<1$ where the bound was previously unknown. For $p=1$ we can replace $W^{1,1}({\mathbb{R}}^d)$ by $\mathrm{BV}({\mathbb{R}}^d)$.
The ingredients used are a pointwise estimate for the gradient of the fractional maximal function, the layer cake formula, a Vitali type argument, a reduction from balls to dyadic cubes, the coarea formula, a relative isoperimetric inequality and an earlier established result for $\alpha=0$ in the dyadic setting. We use that for $\alpha>0$ the fractional maximal function does not use certain small balls. For $\alpha=0$ the proof collapses.
Introduction
For f ∈ L 1 loc (R d ) and a ball or cube B, we denote The centered Hardy-Littlewood maximal function is defined by where the supremum is taken over all balls that contain x. The regularity of a maximal operator was first studied by Kinnunen in 1997. He proved in [18] that for each p > 1 and f ∈ W 1, p (R d ) the bound holds for M = M c . Formula (1.1) also holds for M = M. This implies that both Hardy-Littlewood maximal operators are bounded on Sobolev spaces with p > 1. His proof does not apply for p = 1. Note that unless f = 0 also M f 1 ≤ C d,1 f 1 fails since M f is not in L 1 (R d ). In [16] Hajłasz and Onninen asked whether formula (1.1) also holds for p = 1 for the centered Hardy-Littlewood maximal operator. This question has become a well known problem for various maximal operators and there has been lots of research on this topic. So far it has mostly remained unanswered, but there has been some progress. For the uncentered maximal function and d = 1 it has been proved in [28] by Tanaka and later in [22] by Kurka for the centered Hardy-Littlewood maximal function. The proof for the centered maximal function turned out to be much more complicated. Aldaz and Pérez Lázaro obtained in [3] the sharp improvement ∇ M f L 1 (R) ≤ ∇ f L 1 (R) of Tanaka's result.
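For the reader's convenience, the bound referred to throughout this introduction as formula (1.1) is the gradient estimate, stated here in the notation of the abstract,
$$ \Vert \nabla \mathrm{M} f \Vert_{p} \le C_{d,p} \Vert \nabla f \Vert_{p} , $$
where $\mathrm{M}$ stands for either the centered or the uncentered Hardy–Littlewood maximal operator.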
For the uncentered Hardy-Littlewood maximal function Hajłasz's and Onninen's question already also has a positive answer for all dimensions d in several special cases. For radial functions Luiro proved it in [24], for block decreasing functions Aldaz and Pérez Lázaro proved it in [2] and for characteristic functions the author proved it in [30]. As a first step towards weak differentiability, Hajłasz and Malý proved in [15] that for f ∈ L 1 (R d ) the centered Hardy-Littlewood maximal function is approximately differentiable. In [1] Aldaz et al. proved bounds on the modulus of continuity for all dimensions. A related question is whether the maximal operator is a continuous operator. Luiro proved in [23] that for p > 1 the uncentered maximal operator is continuous on W 1, p (R d ). There is ongoing research for the endpoint case p = 1. For example Carneiro et al. proved in [11] that f → ∇ M f is continuous W 1,1 (R) → L 1 (R) and in [14] González-Riquelme and Kosz recently improved this to continuity on BV. Carneiro et al. proved in [8] that for radial functions f , the operator f → ∇ M f is continuous as a map W 1, The regularity of maximal operators has also been studied for other maximal operators and on other spaces. We focus on the endpoint p = 1. In [12] Carneiro and Svaiter and in [7] Carneiro and González-Riquelme investigated maximal convolution operators M associated to certain partial differential equations. Analogous to the Hardy-Littlewood maximal operator they proved In [9] Carneiro and Hughes proved ∇M f l 1 (Z d ) ≤ C d f l 1 (Z d ) for centered and uncentered discrete maximal operators. This bound does not hold on R d , but because in the discrete setting we have In [21] Kinnunen and Tuominen proved the boundedness of a discrete maximal operator in the metric Hajłasz Sobolev space M 1,1 . In [27] Pérez et al. proved the boundedness of certain convolution maximal operators on Hardy-Sobolev spacesḢ 1, p for a sharp range of exponents, including p = 1. In [29] the author proved var M d f ≤ C d var f for the dyadic maximal operator for all dimensions d.
For 0 ≤ α ≤ d the centered fractional Hardy-Littlewood maximal function is defined by For a ball B we denote the radius of B by r (B). The uncentered fractional Hardy-Littlewood maximal function is defined by where the supremum is taken over all balls that contain x. Note that M α does not make much sense for α > d. For α = 0 it is the Hardy-Littlewood maximal function. The following is the fractional version of formula (1.1).
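Written out in the averaging notation $f_B$ introduced above (and under the convention, used from Sect. 2 on, that $f$ is nonnegative), the two operators just described are
$$ \mathrm{M}^{c}_\alpha f(x) = \sup_{r>0} r^{\alpha} f_{B(x,r)} , \qquad \mathrm{M}_\alpha f(x) = \sup_{B \ni x} r(B)^{\alpha} f_{B} , $$
where the second supremum runs over all balls $B$ containing $x$. The fractional analogue of (1.1), labelled formula (1.2), is the bound
$$ \Vert \nabla \mathrm{M}_\alpha f \Vert_{p^*} \le C_{d,\alpha,p} \Vert \nabla f \Vert_{p} , \qquad p^* = (p^{-1}-\alpha/d)^{-1} ,
$$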
where the constant C d,α,p depends only on d, α and p. In the endpoint p = 1 we can replace W^{1,p}(R^d) by BV(R^d). Earlier proofs first established a pointwise gradient bound for almost every x ∈ R^d, and then concluded formula (1.2) from the L^{(p^{-1}−α/d)^{-1}}-boundedness of M α , which fails for p = 1. Another result by Kinnunen and Saksman in [20] applies for α ≥ 1. In [10] Carneiro and Madrid used this, the L^{d/(d−α)}-boundedness of M α−1 , and Sobolev embedding to conclude formula (1.2). All of this also works for the uncentered fractional maximal function M α . The strategy fails for α < 1.
Our main result is the extension of formula (1.2) to the endpoint p = 1 for 0 < α < 1 which has been an open problem. Our proof of Theorem 1.1 also works for 1 ≤ α ≤ d, and further extends to 1 ≤ p < ∞, 0 < α ≤ d/ p. We present the proof for this range of parameters here, since it also smoothens out the blowup of the constants for p → 1 which occurs in the previous proof for p > 1. Note that interpolation is not immediately available for results on the gradient level. Our approach fails for α = 0. The corner point α = 0, p = 1 is the earlier mentioned question by Hajłasz and Onninen and remains open. Similarly to Carneiro and Madrid, we begin the proof with a pointwise estimate |∇M α f (x)| ≤ (d − α)M α,−1 f (x) which holds for all 0 < α < d for bounded functions. We estimate M α,−1 f in Theorem 1.2 and from that conclude Theorem 1.1.
For the centered fractional maximal function define where r is the largest radius such that M c α f (x) = r α f B(x,r ) and for the uncentered fractional maximal function define Then for almost every x ∈ R d the sets B c α (x) and B α (x) are nonempty, i.e. the supremum in the definition of the maximal function is attained in a largest ball B with x ∈ B, see Lemma 2.2.
For β ∈ R with −1 ≤ α + β < d this allows us to define the following maximal functions for almost every x ∈ R d . Note that also for the centered version the supremum is over all balls B ∈ B c α whose closure contains x, not only over those centered in x. Theorem 1.2 Let 1 ≤ p < ∞ and 0 < α < d and β ∈ R with 0 ≤ α + β + 1 < d/ p and We prove Theorem 1.2 in Sect. 4. There had also been progress on 0 < α ≤ 1 similarly to the Hardy-Littlewood maximal operator. For the uncentered fractional maximal function Carneiro and Madrid proved Theorem 1.1 for d = 1 in [10], and Luiro proved Theorem 1.1 for radial functions in [25]. Beltran and Madrid transferred Luiro's result to the centered fractional maximal function in [5]. In [6] Beltran et al. proved Theorem 1.1 for d ≥ 2 and a centered maximal operator that only uses balls with lacunary radius and for maximal operators with respect to smooth kernels. The next step after boundedness is continuity of the gradient of the fractional maximal operator, as it implies boundedness, but does not follow from it. In [4,26] Beltran and Madrid already proved it for the uncentered fractional maximal operator in the cases where the boundedness is known.
For a dyadic cube Q we denote by l(Q) the sidelength of Q. The fractional dyadic maximal function is defined by where the supremum is taken over all dyadic cubes that contain x. The dyadic maximal operator has enjoyed a bit less attention than its continuous counterparts, such as the centered and the uncentered Hardy-Littlewood maximal operator. The dyadic maximal operator is different in the sense that formula (1.2) only holds for α = 0, p = 1 and only in the variation sense, for which formula (1.2) has been proved in [29]. But for any other α and p formula (1.2) fails because ∇M d α f is not a Sobolev function. We can however prove Theorem 1.4, the dyadic analog of Theorem 1.2. For α ≥ 0 and a function f ∈ L 1 (R d ) define Q α to be the set of all dyadic cubes Q such that for all dyadic cubes P ⊋ Q we have l(P) α f P < l(Q) α f Q .
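For concreteness, and in the notation just introduced, the dyadic fractional maximal function referred to at the start of this paragraph can be written as
$$ \mathrm{M}^{\mathrm{d}}_\alpha f(x) = \sup_{Q \ni x} l(Q)^{\alpha} f_{Q} , $$
the supremum being taken over all dyadic cubes $Q$ containing $x$ (again under the nonnegativity convention of Sect. 2).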
Remark 1.3
In the uncentered setting one could also define B α in a similar way as Q α .
Then Our main result in the dyadic setting is the following.
where the constant C d,α, p depends only on d, α and p. In the endpoint p = 1 we can replace The endpoint result for p = ∞ holds true as well.
In Sect. 2.2 we conclude Theorem 1.4 from Theorem 1.5, and in Sect. 3 we prove Theorem 1.5.
Theorem 1.4 holds with the dyadic version of E and Theorem 1.5 where the sum on the left hand side is over any subset Q ⊂ Q α and the integral on the right is over {cQ : Q ∈ Q}. These localized results directly follow from the same proof as the global results, if one keeps track of the balls and cubes which are being dealt with. The respective localized version of Theorem 1.1 can be proven if one has Lemma 2.4 without the differentiability assumption. Then in the reduction of Theorem 1.1 to Theorem 1.2 one could apply Theorem 1.2 to the same function f and Q α for which one is showing Theorem 1.1, bypassing the approximation step and therefore preserving the locality of Theorem 1.2. This is in contrast to the actual local fractional maximal operator, for which Theorem 1.1 fails by [17,Example 4.2], which works for α > 0. However if α = 0 and p > 1 then the local fractional maximal operator is again bounded due to [19], and by [30] for α = 0 and p = 1 and characteristic functions.
Dyadic cubes are much easier to deal with than balls, but the dyadic version still serves as a model case for the continuous versions since both versions share many properties. This can be observed in [30], where we proved var M 0 1 E ≤ C d var 1 E for the dyadic maximal operator and the uncentered Hardy-Littlewood maximal operator. The proof for the dyadic maximal operator is much shorter, but the same proof idea also works for the uncentered maximal operator. Also in this paper, a part of the proof of Theorem 1.4 for the dyadic maximal operator is used in the proof of Theorem 1.2 for the Hardy-Littlewood maximal operator.
The plan for the proof of Theorem 1.1 is the following. For simplicity we write it down for p = 1.
where σ d is the volume of the d-dimensional unit ball. In the second step we apply the layer cake formula, in the fourth step we pass from a union of arbitrary balls to very disjoint balls B α with a Vitali covering argument, in the eighth step we pass from those balls to comparable dyadic cubes and as the last step use a result from the dyadic setting.
We use α > 0 as follows. Let A be a ball and B(x, r ) be a smaller ball that intersects it [...]. We use this in the fourth step of the proof strategy above. We use a dyadic version of these alternatives in the last step. Note that for α = 0 optimal balls B of arbitrarily different sizes with similar values f B can intersect.
Remark 1.9
There is a proof of Theorem 1.1 which has a structure parallel to the one presented above, but three steps are replaced. The estimate Otherwise it is identical to the proof presented in this paper.
Note that formally [...].
Remark 1.10 In the proof of Theorems 1.1, 1.2, 1.5 and 1.4 we do not a priori need f ∈ L p (R d ). However, from ‖∇ f ‖ p < ∞ we can then anyway conclude f ∈ L p (R d ) by Sobolev embedding.
Reformulation
In order to avoid writing absolute values, we consider only nonnegative functions f for the rest of the paper. We can still conclude Theorems 1.1, 1.2, 1.4 and 1.5 for signed functions because Such a function comes with a measure μ and a function ν : → R d that has |ν| = 1 μ-a.e. such that for all ϕ ∈ C 1 c ( ; R d ) we have u div ϕ = ϕν dμ.
If ∇u is a locally integrable function we call u weakly differentiable. Proof By the weak compactness of L p (R d ) there is a subsequence, for simplicity also denoted by (u n ) n , and a Then
Hardy-Littlewood maximal operator
In this section we reduce Theorem 1.1 to Theorem 1.2. Proof We formulate one proof that works both for the centered and uncentered fractional maximal operator. Let (B n ) n be a sequence of balls with x ∈ B n and Assume there is a subsequence (n k ) k with r (B n k ) → 0. Then f B n k → f (x) and thus lim sup Hence there is a subsequence (n k ) k such that r (B n k ) converges to some value r ∈ (0, ∞). We can conclude that there is a ball B with x ∈ B and r (B) = r and B n k f → B f . So we have A similar argument shows that there exists a largest ball B for which sup B x r (B) α f B is attained.
Then M α f is locally Lipschitz.
Proof If f = 0 then the statement is obvious, so consider f ≠ 0. Let B be a ball. Then there is a ball A ⊃ B with f A > 0. Define 2r (A)) .
That means that on B the maximal function M α f is the supremum over all functions σ −1 d r α−d f * 1 B(z,r ) with r ≥ r 0 and z such that 0 ∈ B(z, r ) for the uncentered operator and z = 0 for the centered. Those convolutions are weakly differentiable with The following has essentially already been observed in [17,20,23,25].
Proof Let B(z, r ) ∈ B α (x) and let e be a unit vector. Note that for the centered maximal operator we have z = x. Then for all h > 0 we have x + he ∈ B(z, r + h). Thus Now we reduce Theorem 1.1 to Theorem 1.2. We prove Theorem 1.2 in Sect. 4.
Proof of Theorem 1.1 For each n ∈ N define a cutoff function ϕ n by Then |∇ϕ n (x)| = 2 −n 1 2 n ≤|x|≤2 n+1 and thus Then since M α f n → M α f pointwise from below, M α f n converges to M α f in L 1 loc (R d ). So from Lemma 2.1 it follows that By Lemma 2.3 we have that M α f n is weakly differentiable and differentiable almost everywhere, so that by Lemmas 2.2, 2.4 and Theorem 1.2 we have which by formula (2.2) converges to ∇ f p . for n → ∞. For the endpoint p = d/α the proof works the same.
Dyadic maximal operator
In this section we reduce Theorem 1.4 to Theorem 1.5. Let 1 ≤ p < d/α and f ∈ L p (R d ).
Recall that we denote by Q α the set of all dyadic cubes Q such that for every dyadic cube P ⊋ Q we have l(P) α f P < l(Q) α f Q . For x ∈ R d , we denote by Q α (x) the set of dyadic cubes Q with x ∈ Q and
Lemma 2.5 Let 1 ≤ p < d/α and f ∈ L p (R d ) and x ∈ R d be a Lebesgue point of f . Then Q α (x) contains a dyadic cube Q x with
and that cube also belongs to Q α .
Proof Let (Q n ) n be a sequence of cubes with l(Q n ) → ∞. Then Let (Q n ) n be a sequence of cubes with l(Q n ) → 0. Then since f Q n → f (x) and l(Q n ) α → 0, we have l(Q n ) α f Q → 0. Thus since for each k there are at most 2 d many cubes Q with l(Q) = 2 k and whose closure contains x, the supremum has to be attained for a finite set of cubes from which we can select the largest. Now we reduce Theorem 1.4 to Theorem 1.5. We prove Theorem 1.5 in Sect. 3.
Proof of Theorem 1.4 By Lemma 2.5, M d α,β f is defined almost everywhere. We have where the last step follows from Theorem 1.5. In the endpoint case we have by Theorem 1.5
Dyadic maximal operator
In this section we prove Theorem 1.5. For a measurable set E ⊂ R d we define the measure theoretic boundary by We denote the topological boundary by ∂ E. As in [29,30], our approach to the variation is the coarea formula rather than the definition of the variation, see for example [13,Theorem 5.9]. Then Lemma 3.2 Let f ∈ L 1 loc (R d ) be weakly differentiable and U ⊂ R d and λ 0 < λ 1 . Then Recall also the relative isoperimetric inequality for cubes.
Lemma 3.3 Let Q be a cube and E be a measurable set. Then
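In its standard form, which is presumably the formulation intended here (the dimensional constant $C_d$ is the only quantity introduced, and its value plays no role in the argument), the relative isoperimetric inequality on a cube reads, writing $\partial^{*}E$ for the measure theoretic boundary defined above,
$$ \min\bigl( |E \cap Q| , |Q \setminus E| \bigr)^{\frac{d-1}{d}} \le C_d \, \mathcal{H}^{d-1}\bigl( \partial^{*} E \cap Q \bigr) . $$
Likewise, the coarea formula invoked at the beginning of this section (cf. [13, Theorem 5.9]) can be stated for $f \in \mathrm{BV}(\mathbb{R}^d)$ as
$$ \operatorname{var} f = \int_{-\infty}^{\infty} \mathcal{H}^{d-1}\bigl( \partial^{*} \{ f > \lambda \} \bigr) \, \mathrm{d}\lambda , $$
which is the identity behind the level set decompositions used below.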
We will use a result from the case α = 0. For a subset Q ⊂ Q 0 and Q ∈ Q 0 , we denote Proposition 3.4 Let 1 ≤ p < ∞ and f ∈ L 1 loc (R d ) and |∇ f | ∈ L p (R d ). Then for every set For p = 1 it also holds with ∇ f 1 replaced by var f .
Remark 3.5
We have that α < β implies Q β ⊂ Q α . This is because for l(Q) < l(P), l(Q) α f Q > l(P) α f P becomes a stronger estimate the larger α becomes.
By Remark 3.5 we can apply Proposition 3.4 to Q = Q α . For p = 1 Proposition 3.4 is Proposition 2.5 in [29]. For the proof for all p ≥ 1 we follow the strategy in [29]. In particular we use the following result. For Q ∈ Q 0 we denotē Lemma 3.6 (Corollary 3.3 in [29]) Let f ∈ L 1 loc (R d ). Then for every Q ∈ Q 0 we have Note that f P >λ P implies P ∈ Q 0 .
Proof of Proposition 3.4 By Lemmas 3.3, 3.2 we have for each P ∈ Q 0 and P Q that We note that for any Q ∈ Q we have λ Q Q ≥ λ ∅ Q and use Lemma 3.6. Then we apply the above calculation, Hölder's inequality and use that (λ P , f P ) and (λ Q , f Q ) are disjoint for P Q, For p = 1 with var f instead of ∇ f 1 we do not use Lemma 3.2 or Hölder's inequality, but interchange the order of summation first and then apply Lemma 3.1.
For a dyadic cube Q denote by prt(Q) the dyadic parent cube of Q. Lemma 3.7 Let 1 ≤ p < d/α and f ∈ L p (R d ) and let ε > 0. Then there is a subsetQ α of Q α such that for each Q ∈ Q α with l(Q) α f Q > ε there is a P ∈Q α with Q ⊂ prt(P) and f Q ≤ 2 d f P . Furthermore for any two Q, P ∈Q α one of the following holds. sup{ f P : P ∈ Q, P Q} ≤ 2 −ε f Q For the endpoint p = ∞ let Q ∈ Q α . Then we use f prt(Q) ≤ 2 −α f Q and copy the proof of the endpoint in Lemma 3.8 with p(Q) = prt(Q) and ε = 1/2.
Hardy-Littlewood maximal operator
In this section we prove Theorem 1.2.
Making the balls disjoint
and let ε > 0. Then for any c 1 ≥ 2, c 2 ≥ 1 there is a set of balls B ⊂ B α such that for two balls which means that r (B) is bounded by .
Then for all B ∈ B 0 we have that r (B) α f B is uniformly bounded. Inductively define a sequence of balls as follows. For B 0 , . . . , B k−1 already defined choose a ball B k ∈ B 0 such that c 1 B k is disjoint from c 1 B 0 , . . . , c 1 B k−1 and which attains at least half of Then B 0 ⊂ B 0 . We proceed by induction. For each n ∈ N define as above greedily select a sequence B n of balls B ∈ B n with almost maximal f B such that for every already selected A ∈ B n we have c 1 B ∩ c 1 A = ∅, and define Note that we have B n ⊂ B n . Finally set B = B 0 ∪ B 1 ∪ . . .. For A ∈ B, we denote Let λ > ε and B ∈ B α with r (B) α+β f B > λ. Then there is an n with B ∈ B n , and hence a A ∈ B n with B ∈ U A,λ . Let A ∈ B and B(x, r ) ∈ U A,λ . Then A ⊂ B(x, 5c 1 r (A)). Since r ∈ R α f (x) we have r α f B(x,r ) ≥ (5c 1 r (A)) α f B(x,5c 1 r (A)) ≥ (5c 1 r (A)) α (5c 1 ) −d f A which implies r ≥ (5c 1 ) 1−d/α r (A)( f A / f B(x,r ) ) 1/α ≥ (5c 1 ) 1−d/α c 1/α 2 r (A). Since r ≤ 5c 1 r (A) it follows that

Acknowledgements I would like to thank my supervisor, Juha Kinnunen, for all of his support. I would like to thank Olli Saari for introducing me to this problem. I am also thankful for the discussions with Juha Kinnunen, Panu Lahti and Olli Saari who made me aware of a version of the coarea formula [13,Theorem 3.11], which was used in the first draft of the proof, and for discussions with David Beltran, Cristian González-Riquelme and Jose Madrid, in particular about the centered fractional maximal operator. The author has been supported by the Vilho, Yrjö and Kalle Väisälä Foundation of the Finnish Academy of Science and Letters.
Funding Open Access funding provided by Aalto University.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. | 2022-02-20T16:21:22.925Z | 2022-02-18T00:00:00.000 | {
"year": 2022,
"sha1": "3a1a93043a63bf10bef7e2a85a2f6a8b33e86f41",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00209-022-02969-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "843a209d722adb92aea1c6b2a11ef6f5bd1da1e2",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
264310074 | pes2o/s2orc | v3-fos-license | Impact of COVID-19 Pandemic on Treatment Management and Clinical Outcome of Aneurysmal Subarachnoid Hemorrhage – A Single-Center Experience
Background: Previous studies reported decreased volumes of acute stroke admissions during the COVID-19 pandemic. We aimed to examine whether aneurysmal subarachnoid hemorrhage (aSAH) volumes demonstrated similar declines in our department. Furthermore, the impact of the pandemic on disease progression should be analyzed.
Methods: We conducted a retrospective study in the neurosurgical department of the university hospital Frankfurt including patients with the diagnosis of aSAH during the first year of the COVID pandemic. One year cumulative volume for aSAH hospitalization procedures was compared to the year before (03/2020 – 02/2021 vs. 03/2019 – 02/2020) and the last 5 pre-COVID-pandemic years (2015-2020). All relevant patient characteristics concerning family history, disease history, clinical condition at admission, active/past COVID infection, treatment management, complications, and outcome were analyzed.
Results: Compared to the 84 hospital admissions during the pre-pandemic year, the number of aSAH hospitalizations (n = 56) declined during the pandemic without reaching significance. No significant difference in the analyzed patient characteristics, including clinical condition at onset, treatment, complications, and outcome, was found between the 56 patients with aSAH admitted during the COVID pandemic and the patients treated during the last 5 pre-COVID years. In our multivariable analysis, we detected young age (p < 0.05; OR 4.2) and absence of early hydrocephalus (p < 0.05; OR 0.13) as important factors for a favorable outcome (mRS 0–2) after aSAH during the COVID pandemic. A past COVID infection was detected in young patients suffering from aSAH (age < 50 years, p < 0.05; OR 10.5) with an increased rate of cerebral vasospasm after aSAH onset (p < 0.05; OR 26). Nevertheless, past COVID infection did not reach significance as a high-risk factor for unfavorable outcomes.
Conclusion: There was a relative decrease in the number of patients with aSAH during the COVID-19 pandemic. Despite the extremely different conditions of hospitalization, there was no significant impairing effect on the treatment and outcome of admitted patients with aSAH. A past COVID infection seemed to be an irrelevant limiting factor concerning favorable outcomes.
INTRODUCTION
The COVID-19 pandemic caused significant disruption to established care paths, including those for acute conditions such as subarachnoid hemorrhage (SAH), in particular aneurysmal SAH (aSAH). In March 2020, Germany declared a state of emergency and implemented restrictions on business, travel, and social life.
In the case of the healthcare system, the main aim was to accommodate the care of critically ill patients with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection worldwide by rationing of resources (1). Therefore, emergency medical services were changed to conserve resources and mitigate infection risk to patients and their providers (2,3).
Additionally, cardiac symptoms were reported in some of the most seriously ill COVID-19 patients (4). There was growing recognition that COVID-19 may not be confined to the respiratory system. Further effects, such as neuro-invasive features leading to devastating ischemic or hemorrhagic events, were related to SARS-CoV-2 infection (5). A raised incidence of strokes in younger patients with COVID-19 has been reported (5).
However, there is a lack of information on the impact of the COVID-19 pandemic on SAH admissions. Early regional or single-center reports from Paris (6) and Toronto (7) suggest a decrease in the number of patients suffering from aSAH, whereas no changes were seen in Berlin (8). Here, we evaluated the impact of COVID-19 on the volumes of aSAH admissions and treatments for patients with ruptured intracranial aneurysms during the peak of the pandemic, defined as March 2020 to February 2021. Specifically, we aimed to quantify the impact of the COVID-19 pandemic on the timing of presentation of patients with acute aSAH, their treatment management, and the progression of the disease.
Clinical Data
We retrospectively analyzed our institutional database of consecutive patients suffering aSAH during the first COVID pandemic year (03/20-02/21), the pre-pandemic year (03/19-02/20), and the last 5 years pre-COVID pandemic (2015-02/2020) as the baseline data. aSAH was defined as a spontaneous nontraumatic hemorrhage into the subarachnoid space with evidence of at least an intracranial aneurysm.
The retrospective clinical study was approved by the local ethics committee of the Goethe University and was performed in accordance with the related guidelines of the regional ethics committee in Frankfurt am Main, Germany. Because of the retrospective design, patient consent form was not needed.
All patients with aSAH, diagnosed by SAH pattern on CTscan, or confirmed by lumbar puncture, underwent cerebral digital subtraction angiography (DSA) to rule out intracranial vascular bleeding sources. Patients in whom the bleeding source was detected to be an aneurysm were included in this study. Patients with ruptured aneurysms were treated by surgical or endovascular aneurysm occlusion based on an interdisciplinary consensus. Patients not receiving treatment at SAH onset because of advanced brain injury or without clinical follow-up 6 months post-SAH were excluded.
All parameters relevant to this analysis, including patient characteristics such as age, gender, relevant previous diseases in the history, anticoagulation, nicotine abuse, positive family history, active/ past COVID infection, clinical condition at admission using Hunt & Hess classification, bleeding pattern described by Fisher score, treatment procedure "clip vs. coil, " the occurrence of cerebral vasospasm (CVS), delayed cerebral infarction (DCI), delayed ischemic neurological deficit (DIND), early hydrocephalus, shunt implantation, and finally, clinical outcome (modified Rankin Scale: mRS after 6 months; favorable outcome mRS 0-2 vs. unfavorable outcome mRS 3-6) in patients with aSAH, were recorded.
Early hydrocephalus was defined as the external ventricular drain (EVD) placement during the first 24 h after admission because of the neurological decline of patients with aSAH.
The diagnosis of CVS was uniformly defined by CT angiography on the basis of the degree of arterial narrowing (mild CVS <33%, moderate CVS 33-66%, severe CVS >66%).
To assess the modified Rankin Scale 6 months after aSAH onset, which is integrated into our standard clinical evaluation forms for aSAH patients, we followed up the course of the disease in all outpatients and still hospitalized patients. In the case of still hospitalized aSAH patients, follow-up data were assessed using unified evaluation forms by rehabilitation clinics and/or nursing homes.
Statistical Analysis
Data analyses were performed using the computer software package IBM SPSS (version 22, IBM SPSS Inc.; Armonk, NY, USA). Analyses of categorical variables were done using Fisher's exact test and GraphPad Prism (8.0, GraphPad Software Inc., USA). Normally distributed variables were expressed as mean values with SD and analyzed using a two-tailed t-test. Multivariable analysis was carried out to identify factors for outcome improvement among the above-mentioned aspects. A p < 0.05 was considered statistically significant.
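As an illustration only — not the authors' actual analysis code — the following minimal Python sketch shows how the reported types of tests (Fisher's exact test, two-tailed t-test, and a multivariable logistic model yielding odds ratios) could be reproduced with open-source tools (SciPy and statsmodels). All variable names and numbers below are placeholders rather than study data.

import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Fisher's exact test on an illustrative 2x2 contingency table
# (rows: pandemic vs. pre-pandemic cohort, columns: event yes/no)
table = np.array([[20, 36],
                  [25, 59]])
odds_ratio, p_fisher = stats.fisher_exact(table)

# Two-tailed t-test for a normally distributed variable (e.g., age at admission)
age_pandemic = rng.normal(55, 14, 56)       # placeholder values
age_prepandemic = rng.normal(56, 13, 84)    # placeholder values
t_stat, p_ttest = stats.ttest_ind(age_pandemic, age_prepandemic)

# Multivariable logistic regression for favorable outcome (mRS 0-2);
# exponentiated coefficients give odds ratios per predictor
n = 56
X = np.column_stack([rng.integers(0, 2, n),   # age < 50 years (binary, placeholder)
                     rng.integers(0, 2, n),   # early hydrocephalus (binary, placeholder)
                     rng.integers(0, 2, n)])  # past COVID infection (binary, placeholder)
y = rng.integers(0, 2, n)                     # favorable outcome (binary, placeholder)
fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(odds_ratio, p_fisher, t_stat, p_ttest)
print(np.exp(fit.params))                     # odds ratios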
Neuro-Emergency Admission and Characteristics of ASAH Patients Before vs. During COVID-19 Pandemic
From March 2019 to February 2020, designated the pre-COVID period, 84 patients presented with a final diagnosis of aSAH. From March 2020 to February 2021, defined as the COVID period in our analysis, 56 patients with aSAH were available. The number of cases recorded by every month of the year showed a general decreasing trend in 2020. In this first year of the COVID period, we observed a drop in the absolute number of cases per calendar month without resulting in a statistical significance (Figures 1A,B, Table 1).
In spite of the decrease in the number of admitted aSAH patients during the first year of COVID pandemic, there is still a parallel distribution as demonstrated in Figures 1 A,B.
Comparing the characteristics of patients admitted in the first year of the pandemic with those of the pre-pandemic year, there was no significant difference in aSAH risk factors such as age, female gender, hypertension, nicotine abuse, positive family history, and anticoagulation. There was additionally no significant difference in the severity of presentation, described by Hunt & Hess classification and Fisher blood pattern score, or in treatment options. Furthermore, a significantly higher risk for complications following aSAH, such as CVS, DCI, DIND, early hydrocephalus, and related shunt implantation, could not be detected. Likewise, there was no sign of a significantly higher risk for unfavorable outcomes in patients with aSAH admitted during the COVID pandemic (Table 1).
As the neurosurgical department of the University Hospital Frankfurt offers intensive care unit (ICU) emergency capacity for the Frankfurt Rhein-Main area (ca. 147755 km²) (9), the mean time interval between aSAH onset and presentation to our vascular center was 0.77 ± 1.3 days for the pre-COVID period and 0.94 ± 1.45 days for the COVID period, without reaching statistical significance. Furthermore, the distance of the admitted patients' residence from our center in km showed no significant change (Table 1).
To underline the above-mentioned results, we analyzed the characteristics of the 56 patients included during the COVID pandemic in comparison with the aSAH patients admitted to our department during the last 5 pre-COVID years (03/2015-02/2020).
Consistent with the previous results, we found no significant difference in the analyzed factors, including patient characteristics, clinical condition at onset, treatment, complications, and outcome, between patients with aSAH admitted during the COVID pandemic and patients with aSAH treated at our department in the last 5 pre-COVID years.
In the comparison with this larger pre-pandemic cohort, the only significant difference was a higher rate of nicotine abuse in patients suffering aSAH during the pandemic year (Table 2).
A total of 8 patients (pre-COVID-pandemic cohort: n = 7; COVID-pandemic cohort: n = 1) could not be included in this study because of hypoxic brain damage diagnosed on the first CT scan at admission as a result of a high volume of SAH and/or following out-of-hospital resuscitation. In addition, we excluded 12 patients (pre-COVID-pandemic cohort: n = 10; COVID-pandemic cohort: n = 2) because of missing follow-up data.
Characteristics of ASAH Patients With Previous COVID-19 Infection
Among the 56 treated patients with aSAH admitted to our department during the COVID pandemic, we detected a total of 6 patients (11%) with a past COVID infection in their history. Active infection was ruled out in these patients by SARS-CoV-2 PCR testing at admission. None of these patients was vaccinated at the time of hospital admission. Comparing the patients with past COVID infection with the non-infected group suffering from aSAH, we found a significantly higher rate of young patients in the past-COVID-infection group (age < 50 years, p < 0.05; OR 10.5). Additionally, patients with aSAH and past COVID infection had an increased rate of CVS during days 4-8 after aSAH onset (p < 0.05; OR 26). There was no significant delay in hospital admission in days in the case of these patients. Both patient groups had the same chance for favorable outcomes (Table 3). None of the patients with past COVID infection was impaired by respiratory dysfunction.
Using a multivariable analysis to identify significant factors for a favorable outcome (mRS 0-2) after aSAH during the COVID pandemic, we detected young age (p < 0.05; OR 4.2) and absence of early hydrocephalus at the initial aSAH onset (p < 0.05; OR 0.13) as important factors. A past COVID infection, which was associated with an increased rate of CVS after aSAH in Table 3, did not reach significance as a high-risk factor (Table 4).
Impact of COVID-19 Pandemic on Treatment Management of ASAH Patients
Our neurosurgical department offers emergency neurosurgical coverage in Frankfurt and its suburbs for a population of approximately 5,816,186 inhabitants (9).
The German government introduced mitigation measures in March 2020 to drastically limit social interactions and consequently virus diffusion. As noted in the results section, we, as a leading vascular center in the region, observed a decrease in acute aneurysmal SAH admissions.
Concerning the decreased number of admitted patients with aSAH, a robust association has been found between psychological stress and aneurysm rupture risk (10). The potential mechanisms behind the association between perceived stress and an increased risk of aSAH are complex and not yet fully understood. Overstimulation of the hypothalamic-pituitary-adrenal axis and increased release of cortisol are discussed as possible mechanisms. In addition, acute as well as chronic psychosocial stress is associated with endothelial dysfunction (11). Several risk factors for rupture of intracranial aneurysms have been defined, including a sudden increase in blood pressure, which may be stress-induced (12). The aforementioned factors would, if anything, explain an increase rather than the decrease of SAH that we observed in clinical practice. The possible explanations for this epidemiological situation, as defined by a Parisian group (6), are a decrease in people seeking medical help for fear of getting infected, a high load on the healthcare system resulting in misdiagnosis, especially of headache as a common symptom of COVID infection (13), and a still unknown number of deaths among quarantined people. Bernat et al. (6) also described patients who experienced aneurysm rupture and were at risk of rebleeding as a fragile population, part of the "collateral damage" of the COVID pandemic.
With a model of a central COVID ICU at the University Hospital of Frankfurt treating all COVID-infected patients and additional discipline-specific non-COVID ICUs caring for non-COVID emergencies, nearly all transfer requests for patients with aSAH from external regional hospitals without a neurosurgical department or Neuro-ICU capacity could be accommodated during the COVID period.
We also detected a decrease in the number of patients with aSAH presenting to our department compared to the pre-pandemic year, which could be the result of the factors mentioned above: not seeking medical help, being afraid of infection, misdiagnosis, and the unknown number of deaths caused by undetected aSAH.
Furthermore, considering the number of admitted patients with aSAH during the last 5 pre-pandemic years, we noticed year-to-year variation in the number of patients suffering from aSAH. Therefore, the small difference in the number of admitted patients during the pandemic vs. the pre-pandemic year should not be the focus of our evaluation.
Nevertheless, all patients diagnosed with aSAH could be treated under standard therapy conditions after transfer to our ICU. All these patients had no proof of active COVID and were not infectious. As already demonstrated, there was no significant difference in the time interval between clinical onset and admission to our center, an important factor in preventing aneurysm rebleeding, which is associated with poorer outcomes. In contrast, an association between the outbreak of the COVID pandemic in the US and delayed patient presentation was repeatedly reported as a therapeutic limitation for patients with acute ischemic stroke, with a strong negative effect on treatment management and neurological outcomes (14-16). A similar pattern was already reported in a large international cohort of subarachnoid hemorrhage, describing a decrease in SAH volumes, including embolization of ruptured aneurysms, similar to reports of decreases in stroke admissions, intravenous thrombolysis, mechanical thrombectomy (MT), and acute ST-elevation myocardial infarction (STEMI) activations during the COVID-19 pandemic (14,17,18).
Fortunately, there was no evidence of a significant delay in hospital admission at our department, which served patients from the same residence area as in the pre-COVID period. Furthermore, we could not detect any significant differences in clinical complications and outcomes after SAH when comparing the pandemic SAH patient group with the pre-pandemic one. This supports the continuation of evidence-based treatment management during the COVID pandemic, including immediate external ventricular drainage to decrease elevated intracranial pressure, DSA within the first 24 h after SAH onset, and treatment by clipping or coiling decided in interdisciplinary dialogue.
However, to care for the massive numbers of patients with COVID, many hospital systems and surgical departments, including ours, focused on conserving resources by limiting elective surgical procedures. During the peak of the pandemic in Western Europe, COVID-19 disrupted the practice of neurosurgeons and affected decision-making in triaging neurosurgical cases. A majority of surgeons reported that all elective cases and clinics were rescheduled (2,8,19).
This, in turn, preserved Neuro-ICU capacity for treating neurological emergencies such as subarachnoid hemorrhage.
Interestingly, the only significant difference was a higher rate of nicotine abuse in patients suffering aSAH during the pandemic year compared to the data from the last pre-pandemic years. This increase in smoking could be related to the psychological stress caused by being quarantined as a result of the pandemic (20).
As noted above, 8 patients (pre-COVID-pandemic cohort: n = 7; COVID-pandemic cohort: n = 1) could not be included in this study because of hypoxic brain damage diagnosed on the first CT scan at admission, and 12 patients (pre-COVID-pandemic cohort: n = 10; COVID-pandemic cohort: n = 2) were excluded because of missing follow-up data. The aim of this analysis was to evaluate and compare the medical care and treatment of patients with aSAH before and during the COVID pandemic, the course of the disease, and the clinical outcome of these patients; including a small number of patients with an incomplete target dataset would not have been conducive to reaching this goal.
Effect of Past COVID Infection on the Progression of ASAH
Having detected six aSAH patients with a past mild COVID infection without respiratory dysfunction, we compared the progression of the disease in these patients with that of the non-COVID SAH group treated during the same period. This study was not designed to establish causality between COVID-19 and cerebral aneurysm rupture; instead, we tried to describe and analyze the characteristics of patients with past COVID infections. Because of the high incidence of aneurysm ruptures in young patients in this series, the inflammatory response accompanying COVID-19 should be considered as a possible cause of premature rupture of preexisting cerebral aneurysms. Based on recent data, any relationship between SARS-CoV-2 infection and cerebral aneurysm rupture could possibly involve macrophage-mediated production of interleukin-1β, interleukin-6, and tumor necrosis factor-α. Macrophages are highly implicated in both aneurysmal rupture and COVID-19-related inflammation (21-24).
The finding that these patients had a significantly higher risk of CVS on days 4-8 after SAH underlines the possibility that macrophage-mediated vasospasm could be promoted by COVID-related inflammation (25). This remains, however, purely speculative and warrants additional investigation. A larger sample of patients with aSAH, together with measurement of inflammatory mediators, would be especially valuable in defining the association between SARS-CoV-2 infection and aSAH.
On the other hand, the increased rate of CVS during days 4-8 after aSAH onset in patients with past COVID infection (p < 0.05; OR 26) could also be explained by the higher number of patients with worse admission status and, more often, a high Fisher grade.
Nevertheless, when considering the young patients with aSAH and post-COVID infection in this group, one should keep in mind the higher risk of vasospasm in young patients after aSAH, which has already been described in previous datasets (26,27). Therefore, a premature interpretation of these sparse data should not mislead further clinical trials in this field.
However, in the multivariate analysis of significant factors for reaching a favorable outcome in our COVID-pandemic group, past COVID infection was not confirmed as a limiting factor, whereas age > 50 years and early hydrocephalus were. This should be examined in further studies including a higher number of patients to obtain a reliable answer to this question.
Study Limitations
The rapidly changing landscape of the COVID-19 pandemic, including the effects of social distancing, may not yet be captured in this dataset. We did not analyze differences in referral patterns and patient flows between different centers in this study. We assume an efficient stroke referral system at our center and stable conditions between the baseline and the COVID period.
Regarding the limitations of the study, the retrospective design and the small number of detected patients with post-COVID SAH should be mentioned. With only six patients suffering from aSAH after a COVID infection, the statistical analysis was not sufficiently powered to prove a clinical impact in these patients. Nevertheless, a separate multivariable analysis of the small patient group with past COVID infection detected a tenuously significant higher risk of CVS as a limiting factor during the progression of the disease, surprisingly without affecting the chance of a favorable outcome (Tables 3, 4).
However, further prospective clinical trials should address the epidemiological aspects and clinical impact of the COVID pandemic in young patients affected by aneurysmal subarachnoid hemorrhage. The next steps should be designed as multicenter clinical trials.
CONCLUSION
These data demonstrate decreasing hospital admissions due to aneurysmal subarachnoid hemorrhage despite unlimited hospital resources for acute stroke care. We suggest that this may be a consequence of social distancing measures. Hence, raising public awareness is necessary to avoid serious healthcare and economic consequences of undiagnosed and untreated diseases. Nevertheless, this current work shows a successful model of health care management in the case of SAH as an acute neurological emergency without any disadvantage for the patients.
Over 1 year into the COVID-19 pandemic, the global community is constantly discovering sequelae of SARS-CoV-2 infection. In this cohort, we report six aSAH cases with past COVID infection. These are among the first reported cases of this specific phenomenon, and they raise questions about a possible interaction between COVID infection, cerebral aneurysm rupture, and clinical outcome after aSAH. The young age of the detected patients and the high risk of CVS deserve further basic research.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Ethic Committee, Medical Department, University Hospital Frankfurt, Goethe-University, Theodor-Stern-Kai 7., 60590 Frankfurt am Main, Germany. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. | 2022-03-23T13:18:02.114Z | 2022-03-21T00:00:00.000 | {
"year": 2022,
"sha1": "ee83adc996a18fdac12e92673282964756a75c92",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2022.836422/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ee83adc996a18fdac12e92673282964756a75c92",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256490140 | pes2o/s2orc | v3-fos-license | Assessment of Agenda Setting With ACGME Milestones in Family Medicine Residents
Background and Objectives: Family medicine residents are scored via milestones created by the Accreditation Council for Graduate Medical Education (ACGME) on various clinical domains, including communication. Communication involves a resident's ability to set an agenda, but this is rarely taught in formal education. Our study aimed to examine the relationship between ACGME Milestone achievement and ability to set a visit agenda, as measured by direct observation (DO) forms. Methods: We examined biannual (December, June) ACGME scores for family medicine residents at an academic institution from 2015-2020. Using faculty DO scores, we rated residents on six items corresponding to agenda setting. We used Spearman and Pearson correlations and two-sample paired t tests to analyze results. Results: We analyzed a total of 246 ACGME scores and 215 DO forms. For first-year residents, we found significant, positive associations between agenda setting and the total Milestone score (r(190)=.15, P=.034) in December, and in individual (r(190)=.17, P=.020) and total communication scores (r(186)=.16, P=.031) in June. However, for first-year residents, we found no significant correlations with communication scores in December or with the total Milestone scores in June. We found significant progression for consecutive years in both communication milestones (t=-15.06, P<.001) and agenda setting (t=-12.26, P<.001). Conclusions: The significant associations found between agenda setting and both ACGME total communication and Milestone scores for first-year residents only suggest that agenda setting may be fundamental in early resident education.
INTRODUCTION
Medical educators are responsible for providing residents with timely, constructive feedback. The Accreditation Council for Graduate Medical Education (ACGME) Family Medicine Milestones are assessments of competencies organized around six core domains: patient care, medical knowledge, systems-based practice, practice-based learning and improvement, professionalism, and communication. 1 Milestone ratings have been found to be a "viable multidimensional tool" to measure resident competence and progression. 2 Agenda setting involves setting expectations for the visit and organizing a comprehensive list of topics to be discussed between patient and provider, thereby reducing the number of unaddressed health concerns. 3 A provider's ability to proactively negotiate a visit agenda is a skill shown to improve deficiencies in communication, but is not always taught as part of standard medical education. 4 Therefore, communication is a specific area of focus for our resident direct observations (DO) of patient encounters.
DO increases the frequency of feedback, helps identify clinical deficiencies, increases resident confidence, and improves resident communication skills. 5 This study examined the relationship between ACGME communication milestone achievement and resident ability to set a visit agenda, as measured by DO forms.
METHODS
This study was approved by the Penn State College of Medicine Institutional Review Board.
Participants
Biannual (fall, spring) ACGME Milestone scores from all residents (n=56) from July 2015-June 2020 were gathered as one of the data sources, assessed based on the 2014 ACGME Family Medicine Milestones (Table 1). 1
Assessment and Procedures
DOs occurred in two Mid-Atlantic suburban clinics in an opposed residency program that is part of a large academic medical center. Information collected from the observation forms included assessment of behaviors in the following skill categories: medical interview, physical exam, assessment and plan, counseling skills/shared decision making, relationship skills, and organization and efficiency. Some categories contained subcategories. Faculty observed the entire encounter and circled, highlighted, or bolded any resident behaviors witnessed and struck through behaviors that could have been, but were not performed (Supplementary Figure 1).
Open-ended fields for additional comments were included in each section and at the end for overall goals. Faculty completed the electronic form immediately following the patient encounter and reviewed the feedback with the resident prior to the end of the clinic session, and a copy of the form was provided to the resident and their advisor. The forms were uploaded into New Innovations management software and used as part of the assessment review prior to and during each Clinical Competency Committee (CCC) meeting. The CCC evaluations are completed through committee (ie, residency faculty) consensus based on review of multiple data sources (eg, In-Training Exam scores, numbers of procedures completed and logged, etc), including DO forms.
Statistical Analysis
We gave a rating to each resident as a percentage of the identified items. We calculated total ACGME scores to assess potential differences in overall communication, individual levels, and general performance. We used Spearman and Pearson correlations and two-sample paired t tests to analyze results by postgraduate year (PGY). We used Bonferroni corrections for multiple comparisons (n=24). We entered data into REDCap software 6 and analyzed using the R statistical program version 4.0.2 (R Foundation for Statistical Computing, Vienna, Austria).
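The following minimal Python sketch illustrates the described tests; the data frame layout and column names are hypothetical and do not come from the study itself.

# Minimal sketch of the described analyses (hypothetical, wide-format table with
# one row per resident); not the study's actual analysis code.
import pandas as pd
from scipy import stats

df = pd.read_csv("resident_scores.csv")   # hypothetical file

# Association between agenda-setting rating and Milestone scores
rho, p_spearman = stats.spearmanr(df["agenda_setting_pct"], df["total_milestone"])
r, p_pearson = stats.pearsonr(df["agenda_setting_pct"], df["total_communication"])

# Progression across consecutive years: paired t test on matched residents
t, p_paired = stats.ttest_rel(df["communication_year1"], df["communication_year2"])

# Bonferroni correction for 24 comparisons (0.05 / 24 ≈ .002)
alpha_adjusted = 0.05 / 24
print(rho, p_spearman, r, p_pearson, t, p_paired, alpha_adjusted)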
RESULTS
We analyzed a total of 246 ACGME CCC evaluations and 215 DO forms from a 5-year period. Not all DO forms completed during this time period were submitted into the New Innovations system, making them inaccessible for this review. There was a significant, positive association between agenda setting and the total Milestone score (r(190)=.15, P=.034) when examining scores from the fall time frame (July 1-December 31, 2015-2020). In spring (January 1-June 30, 2015-2020), we identified significant positive associations between agenda setting and the communication milestone, C-1 (r(190)=.17, P=.020), as well as total communication milestone scores (r(186)=.16, P=.031). These significant associations remained only trends after adjustment with the Bonferroni correction (adjusted α<.002). Overall, significant progression was seen for consecutive years in both communication milestones (t=-15.06, P<.001) and agenda setting (t=-12.26, P<.001). A summary of all significant findings can be found in Table 2.
DISCUSSION
Our study found agenda setting to be significantly associated with both total communication and total Milestone scores for PGY1s. Success in agenda setting in the fall was associated with an increase in PGY1 total Milestone scores. We expected PGY1 improvement in all Milestones in the spring, due to the longer training period, but this was only true for the communication Milestone. Communication is a critical area for faculty to focus on early in PGY1 training, to build confidence in resident-patient interactions, which are reflected in CCC assessments. 5 Agenda setting is a foundational competency impacting PGY1s. It may be that PGY1 Milestone scores on communication were more affected by agenda setting, when compared to PGY2s and PGY3s, because more senior residents have other communication factors contributing to and buffering their scores. Moreover, PGY1s who set an agenda appear to have greater success in communicating with their patients, as both resident and patient are more likely to be satisfied with the topics covered during the encounter if the topics were mutually agreed upon at the start. 3 Our study is limited by missing DO forms, the use of correlations, and the potential for unknown confounders to affect the results. Additionally, this study assessed resident Milestone scores in one residency program in Central Pennsylvania, limiting generalizability. Furthermore, there may be a recall bias due to latency in completion of the DO forms, as they were completed at the end of the patient encounter. There is also a need for a follow-up study to analyze the interrater reliability of the agenda setting assessment. A more recent version of the ACGME Milestones was released in 2019, 7,8 in which the agenda setting and communication Milestones were fundamentally the same; however, in the new version these were assigned to a level 2 versus a level 3 assessment. These data may not reflect the changes associated with the newest version. | 2023-02-02T16:38:14.493Z | 2023-01-31T00:00:00.000 | {
"year": 2023,
"sha1": "f2affc16f195d8c1ea035ac0b5ad01fccd278ddc",
"oa_license": null,
"oa_url": "https://journals.stfm.org/media/5532/parascondo.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1315c50ebcc3340ad3cc39edf166b04e0020256a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
259296350 | pes2o/s2orc | v3-fos-license | Clinical predictive factors and prediction models for end‐stage renal disease in Chinese patients with type 2 diabetes mellitus
Dear Editor Diabetes mellitus (DM) has become a significant chronic condition that seriously affects human health.1 Nowadays, China has become the country with the largest number of DM patients worldwide, of which more than 90% are type 2 diabetes mellitus (T2DM).2 The increasing prevalence of DM exacerbates the incidence of end-stage renal disease (ESRD).3 T2DM-related ESRD not only reduces survival rate and health-related quality of life but also places a significant cost on patients as well as society.4–6 To identify clinical predictive factors and develop prediction models for ESRD risk in T2DM patients, we used the study population extracted from the China Renal Data System, a database containing the information of more than seven million patients attended at 19 hospitals in the Chinese mainland, as previously described.7 ESRD, including an eGFR of 15 mL/min/1.73 m² or less, or the commencement of dialysis or kidney transplantation due to ESRD, was classified as the outcome. Eventually, clinical data of adult patients with T2DM were collected from 17 hospitals. Using a randomized approach, 55 824 patients with T2DM from 10 medical centers were included in the derivation cohort, and 25 745 patients from seven additional medical institutions were included for external validation. The patient selection flowchart is shown in Figure S1. After a median of 384 (123, 900) days of follow-up, there were 1,527 (2.74%) outcomes in the derivation cohort (n = 55,824). Table S1 summarizes the clinical features at baseline. Spearman correlation analysis was conducted to identify the correlation between continuous variables (Figure S2), and variables with higher average correlation (correlation ≥ 0.5) were removed. Univariate Cox regression analysis was used to select potential
Dear Editor
Diabetes mellitus (DM) has become a significant chronic condition that seriously affects human health. 1 Nowadays, China has become the country with the largest number of DM patients worldwide, of which more than 90% are type 2 diabetes mellitus (T2DM). 2 The increasing prevalence of DM exacerbates the incidence of end-stage renal disease (ESRD). 3 T2DM-related ESRD not only reduces survival rate and health-related quality of life but also places a significant cost on patients as well as society. [4][5][6] To identify clinical predictive factors and develop prediction models for ESRD risk in T2DM patients, we used the study population extracted from the China Renal Data System, a database containing the information of more than seven million patients attended at 19 hospitals in the Chinese mainland, as previously described. 7 ESRD, including an eGFR of 15 mL/min/1.73 m 2 or less, or the commencement of dialysis or kidney transplantation due to ESRD, was classified as the outcome. Eventually, clinical data of adult patients with T2DM were collected from 17 hospitals. Using a randomized approach, 55 824 patients with T2DM from 10 medical centers were included in the derivation cohort, and 25 745 patients from seven additional medical institutions were included for external validation. The patient selection flowchart is shown in Figure S1.
After a median of 384 (123, 900) days of follow-up, there were 1,527 (2.74%) outcomes in the derivation cohort (n = 55,824). Table S1 summarizes the clinical features at baseline. Spearman correlation analysis was conducted to identify the correlation between continuous variables (Figure S2), and variables with higher average correlation (correlation ≥ 0.5) were removed. Univariate Cox regression analysis was used to select potential predictors (Table S2). All potential predictors were, therefore, fitted into a multivariable Cox regression model, utilizing step-wise backward selection (p < 0.05). Ten clinical predictive factors, including age, hypertension, diabetes retinopathy (DR), hemoglobin (HGB), serum albumin (ALB), serum creatinine (Scr), serum uric acid, low-density lipoprotein cholesterol (LDL-C), serum fibrinogen, and urinary protein, were selected into the final model (Table S3). We constructed three clinical prediction models using various combinations of predictors selected by multivariable Cox regression (Table 1; Figure 1A). In addition, these models attained satisfactory calibration, as shown in Figure S3A-C. Internal validation using bootstrapping also achieved robust discrimination, with an AUC of 0.914-0.927, as shown in Table 1.
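A hedged sketch of this model-building step is shown below using the lifelines package; the file, the column names, and the simplified backward-elimination loop are assumptions for illustration and may differ from the authors' exact procedure.

# Sketch only (hypothetical columns): univariate screening followed by a
# simplified backward elimination for the multivariable Cox model.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("t2dm_derivation.csv")   # hypothetical derivation cohort
candidates = ["age", "hypertension", "dr", "hgb", "alb", "scr",
              "uric_acid", "ldl_c", "fibrinogen", "urinary_protein"]

# Univariate screening: keep candidates with p < 0.05
kept = []
for var in candidates:
    cph = CoxPHFitter().fit(df[[var, "time_days", "esrd"]],
                            duration_col="time_days", event_col="esrd")
    if cph.summary.loc[var, "p"] < 0.05:
        kept.append(var)

# Multivariable model with stepwise backward elimination (p < 0.05 to stay)
while True:
    cph = CoxPHFitter().fit(df[kept + ["time_days", "esrd"]],
                            duration_col="time_days", event_col="esrd")
    if cph.summary["p"].max() < 0.05 or len(kept) == 1:
        break
    kept.remove(cph.summary["p"].idxmax())

print(kept)
cph.print_summary()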
In the external validation cohort (n = 25,745), during a median follow-up of 321 (90, 758) days, there were 1,084 (4.21%) outcomes. Table S4 presents the baseline clinical features. Based on the receiver-operating characteristic (ROC) curves, the prediction models achieved an AUC ranging from 0.868 to 0.882 ( Figure 1B). As seen in Figure S3D-F, these models also attained satisfactory calibration.
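Discrimination on the external cohort could be checked along the lines below, continuing from the fitted model of the previous sketch; column names remain hypothetical, and the crude AUC shown here ignores censoring, so it only approximates the methodology behind the reported values.

# Simplified external-validation sketch (hypothetical columns).
import pandas as pd
from lifelines.utils import concordance_index
from sklearn.metrics import roc_auc_score

ext = pd.read_csv("t2dm_external.csv")
risk = cph.predict_partial_hazard(ext)    # cph fitted on the derivation cohort

# Harrell's C-index accounts for censoring (higher score = shorter survival here)
c_index = concordance_index(ext["time_days"], -risk, ext["esrd"])

# Crude AUC treating ESRD occurrence as a binary label (ignores censoring)
auc = roc_auc_score(ext["esrd"], risk)
print(c_index, auc)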
In conclusion, using a large multi-center retrospective cohort in the Chinese mainland, we identified 10 clinical predictive factors and developed models to predict ESRD in T2DM patients, which showed excellent prediction performance. To the best of our knowledge, we have established models to predict ESRD based on the largest population of T2DM patients in the Chinese Mainland. These prediction models were further provided as simple bedside tools, including a risk score and a nomogram, which could be extensively applied to assess T2DM patients' ESRD risk in clinical practice, to aid clinical decision-making and sensible resource allocation.
ACKNOWLEDGEMENTS
The authors wish to thank the clinicians and healthcare professionals at the participating centers in the CRDS Study. Yongxiang Gao and the team of Digital Health China Technologies Co., LTD deserve special gratitude for their assistance with data extraction. This study was supported by the National Key Research and Development Program of China to Prof. Bicheng Liu as PI (grant number: 2018YFC1314000).
CONFLICT OF INTEREST STATEMENT
The authors declare no conflict of interest.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are accessible from the corresponding author upon request. The data are not publicly available owing to ethical or privacy concerns.
TABLE 2. A risk score of model 1 (full model); columns: Predictors, Category, Point. | 2023-07-01T06:16:09.876Z | 2023-06-29T00:00:00.000 | {
"year": 2023,
"sha1": "bff19cfb082393d3eca5163385e5c39311995ba5",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b4bbc94d7845e75bb07205440eb2036b4434b573",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235425380 | pes2o/s2orc | v3-fos-license | Association of immune‐related pneumonitis with clinical benefit of anti‐programmed cell death‐1 monotherapy in advanced non‐small cell lung cancer
Abstract Background The association between the development of checkpoint inhibitor pneumonitis (CIP) with tumor response and survival has remained unclear so far. The aim of the present study was to evaluate the association between CIP and the clinical efficacy of anti‐programmed cell death‐1 antibody in patients with advanced non‐small cell lung cancer (NSCLC). Methods Between January 2016 and August 2019, 203 advanced NSCLC patients were administered with nivolumab or pembrolizumab. Comparisons were made between patients with and without CIP. We evaluated the time‐to‐treatment failure (TTF), progression‐free survival (PFS), and overall survival (OS). Results CIP was observed in 28 (14%) patients. CIP was associated with a longer PFS (18.9 months [95% confidence interval, CI: 8.7 months–not reached] vs. 3.9 months [95% CI: 3.4–5.1 months, p < 0.01]) and longer OS (27.4 [95% CI: 20.7 months–not reached] vs. 14.8 months [95% CI: 11.2–17.9 months, p = 0.003]). Most patients discontinued the immune checkpoint inhibitor (ICI) treatment when they developed CIP. Seven patients (25%) lived for more than 300 days from treatment discontinuation and did not show any long‐term tumor growth after treatment discontinuation. Conclusion CIP was associated with prolonged PFS and OS. Additionally, 25% of CIP patients did not show any tumor growth for long periods after treatment discontinuation. Careful management of CIP can help in obtaining the best clinical efficacy from anti‐PD‐1 antibody.
| INTRODUCTION
Lung cancer is the most common cause of cancer-related death worldwide. 1 Recently, programmed cell death 1 (PD-1) and programmed cell death ligand 1 (PD-L1) inhibitors (alone or in combination) have been shown to result in higher survival rates than standard chemotherapy in patients with advanced NSCLC. [2][3][4][5][6][7][8] Administration of immune-checkpoint inhibitors (ICIs) is complicated by immune-related adverse events (irAEs) including pneumonitis, skin reactions, thyroid dysfunction, hepatitis, and infusion reaction; these differ from the adverse events of conventional systemic therapy. 9 Some studies have reported that the occurrence of irAEs is linked to the clinical efficacy of ICIs in patients with NSCLC. [10][11][12] However, it is unknown which irAE is particularly linked to the clinical benefit of ICIs.
Among the irAEs, checkpoint inhibitor pneumonitis (CIP) develops in 5%-10% of patients treated with ICIs and represents a potentially serious toxicity. 13 Similar to other irAEs, low-grade CIP was found in most cases, and it improved with immunosuppressive therapy. However, severe CIP can lead to fatal respiratory failure. 14 The onset of CIP may reflect the degree of immune activity, but it is currently unclear whether CIP development is an indicator of better antitumor response or clinical efficacy. We performed a retrospective study to investigate the association between CIP and the clinical efficacy of nivolumab or pembrolizumab in advanced NSCLC patients.
| Patients
We performed a single-institutional retrospective study of the medical records of patients with advanced NSCLC treated with either nivolumab or pembrolizumab between January 2016 and August 2019 at Sendai Kousei Hospital.
| Assessment
We analyzed CIP, skin reaction, infusion reaction, thyroid dysfunction, and hepatitis as irAEs. We defined irAEs as adverse events that require more frequent monitoring and may have an immunological basis requiring intervention with immunosuppression and/or endocrine replacement therapy. 15 Patients were assigned to two groups (with or without CIP), and we evaluated their time-to-treatment failure (TTF), progression-free survival (PFS), and overall survival (OS). The best tumor response was defined with reference to the Response Evaluation Criteria in Solid Tumors (Ver. 1.1). 16 We evaluated the clinical severity of irAE according to the Common Terminology Criteria for Adverse Events, version 4.0.
| Statistical analysis
Differences between groups were evaluated utilizing the chi-square test, Student's t-test, the Mann-Whitney U test, or Welch's t-test, where appropriate. Survival outcomes were estimated with Kaplan-Meier curves and compared between patient groups with the log-rank test. The relationship between patient variables and response was evaluated with univariate and multivariate logistic regression analyses. Hazard ratios (HRs) were estimated using Cox proportional hazards models.
All p values were two-sided, and those <0.05 were considered statistically significant.
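For illustration, the sketch below shows how the group comparison and Cox model could be set up in Python with scipy and lifelines; the data file and column names are hypothetical, and this is not the study's actual analysis code.

# Illustrative sketch (hypothetical data): chi-square group comparison and a
# Cox proportional hazards model with CIP as the covariate of interest.
import pandas as pd
from scipy.stats import chi2_contingency
from lifelines import CoxPHFitter

df = pd.read_csv("nsclc_pd1_cohort.csv")

# Chi-square test: response rate in patients with vs. without CIP
table = pd.crosstab(df["cip"], df["responder"])
chi2, p, dof, _ = chi2_contingency(table)
print(chi2, p)

# Cox model for overall survival; hazard ratios are reported as exp(coef)
cph = CoxPHFitter().fit(df[["cip", "age", "ps", "os_months", "death"]],
                        duration_col="os_months", event_col="death")
cph.print_summary()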
Lay summary
Checkpoint inhibitor pneumonitis (CIP) is one of the immune-related adverse events (irAEs). To date, an association between CIP and better tumor response or clinical outcome remains unclear. In our study, CIP was associated with progression-free survival (PFS) and overall survival (OS). Also, 25% of CIP patients did not have tumor progression long after treatment discontinuation. Most importantly, appropriate management was performed at the time of CIP onset. We think that careful management of CIP might maximize the clinical benefit of nivolumab and pembrolizumab monotherapy.

mutations were expressed in 21 (10%) patients. Thirty-eight (19%) patients had not been on any prior chemotherapy regimen, whereas 91 (45%), 37 (18%), and 37 (18%) patients had received 1, 2, or ≥3 courses of chemotherapy. PD-L1 was expressed in abundance (tumor proportion score ≥50%) in 51 (25%) patients, and was expressed at low levels (1% ≤ tumor proportion score <50%), absent (tumor proportion score <1%), or unknown in 152 (75%) patients. No patient had any active autoimmune disease or interstitial lung disease.
The ORR was significantly better in patients who developed CIP and in those who developed skin reactions than in those without these irAEs (68% vs. 24%, p < 0.001 and 53% vs. 20%, p < 0.001, respectively) (Table 2). Table 3 shows the predictors of ORR, PFS, and OS in the multivariate analysis. CIP was also an independent predictor of ORR, PFS, and OS (ORR odds ratio 9, Table 4). Of the 28 patients who had CIP, 7 developed Grade 3 or higher CIP, with good treatment responses (partial response or stable disease). All these patients with severe-grade CIP were treated with steroid therapy for CIP, as indicated by the American Society of Clinical Oncology clinical practice guidelines. 18 The cryptogenic organizing pneumonia (COP) pattern was the most frequent, occurring in 19 of 28 cases (68%), including the COP + ground-glass opacity (GGO) pattern.
Regarding the outcomes of CIP, there were 24 cases of improvement, 2 cases of no change, 1 case that worsened, and 1 case of death. Patient No. 25 died due to CIP; his chest imaging showed worse findings, despite the administration of steroids. The mortality rate of all the CIP grades was 3.5%. A swimmer plot displaying the course of treatment for all CIP patients is shown in Figure 3. Most patients discontinued ICI treatment when they developed CIP. Treatment of CIP was conducted in 24 of 28 (86%) patients: five patients received intravenous steroid pulse therapy, and 19 patients received prednisolone 0.5-2 mg/kg therapy. CIP was improved or resolved in 24 patients. Focusing on the period from ICI treatment discontinuation to disease progression or death, seven patients lived for more than 300 days, and these patients did not show any long-term tumor growth after treatment discontinuation (Figure 3 and Table 4; patient numbers 1, 2, 3, 4, 5, 6, and 7).
| Overall result
In this study, we examined the correlation of CIP onset with tumor response and survival. A multivariate analysis revealed that CIP was associated with ORR, PFS, and OS. PFS and OS were longer in patients with CIP than in those without CIP, even though they had similar TTFs. To the best of our knowledge, this is the first report showing the development of CIP as an independent predictor of tumor response and survival in patients treated with anti-PD-1.
| IrAEs and tumor response
Some studies have reported that irAEs are related with better outcomes in melanomas. [19][20][21][22] There are similar reports on lung cancer. We reported that the development of any irAE is associated with a longer PFS in patients with advanced NSCLC treated with anti-PD-1, 10 and similar results were shown in other studies. 11,12,23 For each irAE, we also reported that the development of skin reactions was correlated with a better response to anti-PD-1 antibodies. 24 In addition, a past study reported that patients treated with pembrolizumab who developed immune-related thyroid dysfunction had a significantly longer OS than those who did not. 25 These findings have led to reports linking the development of any irAE or a specific irAE with clinical benefits; however, it is unknown which irAE is particularly associated with clinical benefit. In this study, we found that CIP was a significant independent predictor of clinical benefit among the irAEs by multivariate regression.
| Checkpoint inhibitor pneumonitis
CIP is a significant adverse event that may lead to discontinuation of treatment and mortality. Symptoms of CIP are variable and nonspecific; the most common symptoms are dyspnea and cough, while fever and chest pain are less common. Furthermore, one third of patients may be asymptomatic at the onset. 14 Therefore, members of the patient's care team need to have a high level of awareness, so that changes in the early stages of the disease are not missed. Overall, the incidence of CIP is estimated to be between 3% and 6%. 14 However, some reports showed a higher incidence between 13% and 19%, and our results are consistent with these reports. 23,[26][27][28] This difference in incidence may be due, in part, to the greater frequency of computed tomography (CT) in our routine clinical practice, which may have led to earlier detection, and to the inclusion of patients at potential risk for CIP who received PD-1 inhibitors outside of clinical trials.
Radiologic features of CIP have been characterized into five subtypes: COP, GGO, interstitial lung disease, hypersensitivity, and pneumonitis not otherwise specified. 14 Viral pneumonia such as coronavirus disease, alveolar hemorrhage, and interstitial pneumonia also show non-specific CT patterns. We need to be careful in distinguishing these diseases from CIP. 29 In the group with CIP, TTF was shortened, but PFS and OS were prolonged. However, in a previous report, OS was significantly shortened in patients with CIP. 30 This difference may lie in the variation in performance status (PS). In our study, about 4% of all patients had PS 2-4, whereas such patients constituted 15.2% (26/170) of the sample in the study by Fukihara et al. This difference in findings may be due to differences in patient characteristics. Treatment of CIP generally starts with discontinuation of the drug suspected of causing pneumonitis. Patients with drug-related pneumonitis also need to be treated with immunosuppressive drugs, but the response to systemic steroids varies with the anticancer therapy. 31,32 Some cases may be fatal despite appropriate therapy (Table 4). In our study, the CIP mortality rate was 3.5% (n = 1). Previous studies have reported a CIP-related mortality rate ranging from 0% to 20%. 7,26,33 The CIP-related mortality in patients treated with ICIs may be lower than that in patients treated with molecular targeted therapy or conventional chemotherapy. 31,32 The different causes of pneumonitis, whether from ICIs or other drugs, may explain this difference. Pneumonitis after anticancer therapy may be caused directly by pulmonary toxicity or indirectly by activation of the body's inflammatory response. 34 Most conventional chemotherapies and molecular-targeted drugs directly injure airway epithelial cells, alveolar epithelial cells, and capillaries, resulting in non-reversible fibrosis. However, ICI treatment may lead to over-activation of T cells that causes a reversible immune response in the lungs. 35 These observations suggest that anti-PD-1 antibody-induced pneumonitis might be manageable with immunosuppressive therapy. Figure 3 shows a swimmer plot of patients with CIP. Some patients with pneumonitis remained free of recurrence long after treatment discontinuation. A subset of patients who responded to anti-PD-1 in previous studies reported long-term clinical benefit even after discontinuation of therapy. 36 In this study, although OS in patients with CIP was longer than in those without CIP, the underlying factors are not clear. This good outcome may be related to early diagnosis and treatment, which ensures proper management at a less severe stage. We have an irAE management team (frontline immunotherapy team [FIT]) that enables us to perform a thorough physical examination and to report and treat even mild symptoms as early as possible.
Presently, the mechanism by which the antitumor effect persists even after interruption of ICI treatment has not been clarified. Osa et al. assessed how long antibodies remain bound to T cells and the relationship between this duration and the residual therapeutic effect or potential adverse events. They reported that the binding of nivolumab to memory T cells in the blood was detectable more than 20 weeks after the last dose, irrespective of the number of nivolumab doses or subsequent treatments. 37 This mechanism suggests that long-term antitumor effects may be sustained. Therefore, with appropriate management of CIP, a good and durable therapeutic effect may be obtained.
| Limitations
There are a few limitations to our study. The study had a small sample size and was a non-randomized, retrospective, single-center cohort study. Also, the expression level of PD-L1 was not measured in all patients because of the commercial unavailability of diagnostic kits at the beginning of the study in Japan. Hence, we were not able to sufficiently evaluate the effect of PD-L1 expression on therapeutic outcome. Recently, the combination of chemotherapy and immunotherapy has become the norm, reducing the chances of treating patients with anti-PD-1 monotherapy. However, the results of this study may be useful in predicting the clinical efficacy of chemotherapy-immunotherapy combination therapy or immunotherapy combination therapy.
| CONCLUSION
In our study, CIP was associated with prolonged PFS and OS. Moreover, in 25% of patients with CIP, tumors did not grow for a long time after treatment was discontinued. Most importantly, appropriate management was carried out at the time of CIP onset, and we believe that careful CIP management, especially early detection and treatment, will help attain the maximum clinical efficacy from anti-PD-1 antibody monotherapy. | 2021-06-15T06:16:22.979Z | 2021-06-13T00:00:00.000 | {
"year": 2021,
"sha1": "d67d171e7db684eb27c29f5a4c65e202dca03fc5",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cam4.4045",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "68daf89b6e9ce61c2b9b105fc03685c3fa611753",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222311736 | pes2o/s2orc | v3-fos-license | Comprehensive Analysis of Immunoinhibitors Identifies LGALS9 and TGFBR1 as Potential Prognostic Biomarkers for Pancreatic Cancer
Pancreatic cancer (PC) is one of the most deadly cancers worldwide. Uncovering novel biomarkers for early diagnosis and prognosis in the molecular therapeutic field of PC is extremely important. Accumulating evidence indicates that aberrant expression or activation of immunoinhibitors is a common phenomenon in malignancies, and significant associations have been noted between immunoinhibitors and tumorigenesis or progression in a wide range of cancers. However, the expression patterns and exact roles of immunoinhibitors contributing to the tumorigenesis and progression of pancreatic cancer (PC) have not yet been elucidated clearly. In this study, we investigated the distinct expression and prognostic value of immunoinhibitors in patients with PC by analyzing a series of databases, including TISIDB, GEPIA, cBioPortal, and the Kaplan-Meier plotter database. The mRNA expression levels of IDO1, CSF1R, VTCN1, KDR, LGALS9, TGFBR1, TGFB1, IL10RB, and PVRL2 were found to be significantly upregulated in patients with PC. Aberrant expression of TGFBR1, VTCN1, and LGALS9 was found to be associated with worse outcomes in patients with PC. Bioinformatics analysis demonstrated that LGALS9 was involved in regulating the type I interferon signaling pathway, interferon-gamma-mediated signaling pathway, RIG-I-like receptor signaling pathway, NF-kappa B signaling pathway, cytosolic DNA-sensing pathway, and TNF signaling pathway. TGFB1 was related to mesoderm formation, cell matrix adhesion, the TGF-beta signaling pathway, and the Hippo signaling pathway. These results suggest that LGALS9 and TGFBR1 might serve as potential prognostic biomarkers and targets for PC.
Introduction
Pancreatic cancer (PC), the fourth leading cause of cancer-related death, has a poor prognosis, with an overall five-year survival rate lower than 10% [1]. Due to the hidden symptoms at early stages, fewer than 15% of patients are diagnosed with PC at a stage when they could be eligible for curative surgical resection [2]. To improve early detection and prognosis and to provide timely and effective treatment for high-risk patients, predictive biomarkers for PC are required [3,4]. To date, carbohydrate antigen 19-9 (CA19-9) and CA242, which are currently used in clinical settings as serum biomarkers for PC, are inadequate for early screening and prognosis [5,6]. Uncovering novel biomarkers for early diagnosis and prognosis in the molecular therapeutic field of PC is therefore extremely important.
Previously, harnessing the immune system to recognize and eradicate tumors has led to significant advances in the clinical use of cancer immunotherapy [7][8][9]. Notable among these is the emergence of immune checkpoint inhibitors, which typically interfere with negative regulators of T cell immunity, including LAG3 [10][11][12], CTLA-4 [13,14], PD-1 [15,16], and TIM3 [17,18]. The advent of these "checkpoint inhibitors" has thoroughly altered and improved former therapies for melanoma, lung cancer, and other malignancies [19]. For instance, interference with LAG3 relieves T cell exhaustion and heightens antitumor immunity, owing to the interaction of LAG3 with MHC class II and galectin 3 [20]. Additionally, TIM3 produced by tumor-infiltrating lymphocytes (TILs) has been identified as playing a key role in maintaining an inactive lymphocyte status or inducing lymphocyte apoptosis [21].
LGALS9 is a ligand of TIM-3 and is expressed in a variety of cell types, especially in lymphoid organs and monocytes [22]. In addition, LGALS9 can exert unequal effects on different immune cells in the tumor microenvironment [22]. TGF-β signaling is important in regulating biological processes including cell growth and death, differentiation, angiogenesis, and inflammation [23]. Several recent studies have demonstrated that TGF-β signaling plays a key role in the immune response [24]. Understanding the potential functions and expression patterns of immune "checkpoint inhibitors" could be helpful for identifying novel prognostic and therapeutic biomarkers for PC.
The emergence and development of new technologies, including microarray and RNA sequencing, have had a positive effect on molecular research and have also given impetus to exploring accurate and safe treatments for PC [25][26][27].
Here, we expanded PC-related knowledge by drawing on different databases, thereby generating a comprehensive analysis of the link between immunoinhibitors and the diagnosis and development of PC.
Survival Analysis.
Kaplan-Meier plotter (http://www.kmplot.com/) is an online database containing microarray gene expression data and survival information derived from the Gene Expression Omnibus, TCGA, and the Cancer Biomedical Informatics Grid. The Kaplan-Meier (K-M) Plotter database was used to analyze the prognostic value of the candidate genes [28]. K-M survival curves and the log-rank test were used to assess the correlation of gene expression with overall survival (OS), first progression (FP), or post-progression survival (PPS), respectively. A significant difference was indicated as P < 0.05.
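A rough Python sketch of this kind of expression-based survival comparison is given below; the merged expression/survival table, the median split, and the gene choice are illustrative assumptions, since Kaplan-Meier plotter selects its cutoffs internally.

# Sketch only: split patients by median LGALS9 expression and compare OS.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

data = pd.read_csv("paad_expression_survival.csv")   # hypothetical merged table
high = data["LGALS9"] >= data["LGALS9"].median()

result = logrank_test(data.loc[high, "os_months"], data.loc[~high, "os_months"],
                      event_observed_A=data.loc[high, "os_event"],
                      event_observed_B=data.loc[~high, "os_event"])
print(result.p_value)

km = KaplanMeierFitter()
for label, mask in [("LGALS9 high", high), ("LGALS9 low", ~high)]:
    km.fit(data.loc[mask, "os_months"], data.loc[mask, "os_event"], label=label)
    km.plot_survival_function()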
Construction of Protein Interaction Network.
A functional protein interaction network was constructed as indicated on the STRING website (http://string-db.org/) [29]. Fifty selected proteins associated with Homo sapiens were included, and interactions with a confidence score greater than 0.9 were retained.
2.3. TISIDB, GEPIA, TCGA, and cBioPortal Analysis. TISIDB is an integrated repository portal for tumor-immune system interactions. The present study used the TISIDB database (http://cis.hku.hk/TISIDB) to detect the relationship between immunoinhibitor expression and clinical stages, lymphocytes, immunomodulators, and chemokines in PC. Gene Expression Profiling Interactive Analysis (GEPIA) [30] is a powerful tool offering key interactive and customizable functions, including differential expression analysis, profiling plotting, correlation analysis, patient survival analysis, similar gene detection, and dimensionality reduction analysis, and was used to determine mRNA expression in 9,736 tumors and 8,587 normal tissues. The cBioPortal system was used to investigate cancer genomic and clinical characteristics within the 105 cancer studies of the TCGA pipeline [31]. In addition, the coexpression and interaction of selected proteins were examined according to the cBioPortal guidelines [32].
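The coexpression screening reported later in the Results can be approximated with the following Python sketch; the expression matrix file, the use of networkx, and the export step are illustrative assumptions rather than the cBioPortal workflow itself.

# Sketch only: immunoinhibitor-target pairs with |Pearson r| > 0.5 become edges.
import pandas as pd
import networkx as nx

expr = pd.read_csv("paad_expression_matrix.csv", index_col=0)   # samples x genes
immunoinhibitors = ["IDO1", "CSF1R", "VTCN1", "KDR", "LGALS9",
                    "TGFBR1", "TGFB1", "IL10RB", "PVRL2"]

corr = expr.corr(method="pearson")      # gene-gene Pearson correlation matrix
G = nx.Graph()
for gene in immunoinhibitors:
    partners = corr[gene].drop(gene)
    for target, r in partners[partners.abs() > 0.5].items():
        G.add_edge(gene, target, weight=round(r, 3))

print(G.number_of_nodes(), G.number_of_edges())
# The edge list can be exported for visualization in Cytoscape:
nx.write_weighted_edgelist(G, "coexpression_edges.txt")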
Gene Ontology and Pathway Enrichment Analysis.
Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses were performed using the DAVID online tool. P < 0.01 was set as the cutoff criterion.
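The enrichment statistic behind tools such as DAVID can be illustrated with a simple hypergeometric test; all counts in this sketch are hypothetical.

# Simplified sketch of a per-pathway enrichment test (one-sided, hypergeometric).
from scipy.stats import hypergeom

background = 20000        # assumed number of annotated background genes
pathway_size = 150        # genes annotated to the pathway (e.g., a KEGG term)
query_size = 400          # genes co-expressed with LGALS9 (hypothetical)
overlap = 18              # query genes found in the pathway

# P(X >= overlap) under the hypergeometric null
p_value = hypergeom.sf(overlap - 1, background, pathway_size, query_size)
print(p_value)            # pathways with p < 0.01 would be reported as enriched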
2.5. Statistical Analysis. Student's t-test was used to assess statistical significance. Statistical analysis was performed with SPSS 21.0 (SPSS Inc., Chicago, IL).
Increasing Expression of Immunoinhibitors Was Observed in PC Samples. The GEPIA database was used to compare the expression of the 9 overexpressed immunoinhibitors at the transcriptional level between cancers and normal tissues (Figure 1). The data demonstrated TGFB1 (Figure 2).
3.3. Immunoinhibitors Were Positively Correlated to the Advanced Stage and Grade in PC. Furthermore, the TISIDB database analysis showed that TGFB1 was positively correlated with advanced grades of PC samples (Figure 3). However, we did not observe a significant upregulation of PVRL2 (Figure 3). Interestingly, our data also revealed a correlation between immunoinhibitor levels and the stages of PC samples. The results showed that the expression of VTCN1 was raised in grade 2 and grade 3 samples compared to grade 1 PC samples, but the expression of VTCN1 was decreased in grade 4 samples when normalized to that in grade 1/2 PC samples (Figure 4(b)). Meanwhile, our data showed that IL10RB was enhanced in grade 2, grade 3, and grade 4 PC samples compared to grade 1 PC samples (Figure 4(g)). However, there was no obvious difference in the expression of TGFB1 (Figure 4).
To Assess the Coexpression and Interaction of Genes with Immunoinhibitors in PC Patients. We evaluated the association of candidate gene expression with immunoinhibitors by Pearson's correlation analysis. An immunoinhibitor-target pair with an absolute Pearson's correlation coefficient > 0.5 was considered significant. The networks were constructed using Cytoscape software. As presented in Figure 6, the coexpression network included 9 immunoinhibitors, 1250 targets, and 1304 edges. From the analysis, we observed that LGALS9 may have a primary role in this network, sharing approximately 30% of its coexpressed targets with IL10RB, PVRL2, and IDO1.
3.6. Assessment of the Function of LGALS9 and TGFB1 in PC Patients. We finally assessed the roles of LGALS9 and TGFB1 by GO and KEGG analyses of their related genes in the DAVID system. After bioinformatics analysis, LGALS9 was found to be involved in regulating the type I interferon signaling pathway, defense response to virus, interferon-gamma-mediated signaling pathway, response to virus, innate immune response, and negative regulation of viral genome replication (Figure 7(a)). KEGG pathway analysis demonstrated that LGALS9 was related to the RIG-I-like receptor signaling pathway, NF-kappa B signaling pathway, cytosolic DNA-sensing pathway, and TNF signaling pathway (Figure 7(b)).
Also, we found that TGFB1 was related to mesoderm formation, cell matrix adhesion, substrate adhesion-dependent cell spreading, in utero embryonic development, extracellular matrix organization, outflow tract septum morphogenesis, mitral valve morphogenesis, and Hippo signaling (Figure 7(c)). And KEGG pathway analysis showed TGFB1
Discussion
PC is regarded as one of the most deadly carcinomas, with a poor prognosis [33]. The growth and development of cancer have been reported to involve immune suppression [34]. Cancer cells can stimulate various immune checkpoint pathways responsible for curbing immunity [35]. Monoclonal antibodies targeting immune checkpoints have brought major advances in cancer therapy. Currently, several studies have revealed that patients with different cancers respond better after treatment with checkpoint inhibitors. Developing prospective approaches based on immunoinhibitors could therefore be of significance for exploring novel biomarkers for PC diagnosis and prognosis.
In this study, we analyzed the expression pattern of 23 immunoinhibitors in PC using the TCGA database and found that IDO1, CSF1R, VTCN1, KDR, LGALS9, TGFBR1, TGFB1, IL10RB, and PVRL2 were highly expressed in PC tissues. Moreover, the analysis revealed that IDO1, CSF1R, VTCN1, KDR, LGALS9, TGFBR1, TGFB1, IL10RB, and PVRL2 mRNA levels were significantly upregulated in patients with PC compared to normal tissues. Kaplan-Meier plotter results demonstrated that increased levels of TGFBR1, VTCN1, and LGALS9 mRNA were closely related to poor OS.
TGF-β is a primary executor of stability and tolerance in the internal environment of the immune system, controlling many of its component functions [36]. Thus, disrupting TGF-β signaling can result in inflammatory diseases and tumorigenesis. In addition, TGF-β is also a primary immunosuppressor in the tumor microenvironment [37]. Current research has reported that TGF-β is engaged in tumor immune evasion and in adverse responses to tumor immunotherapy [37]. Consistently, our study confirmed that TGFBR1 and TGFB1 are upregulated in PC samples. Kaplan-Meier analysis showed that TGFBR1 was associated with reduced OS and PFS time in PC patients.
VTCN1 (B7-H4) exists on the surface of antigen-presenting cells (APCs) and interacts with ligands that bind to T-cell surface receptors [38]. The activity of B7-H4 has been shown to be related to reduced inflammatory CD4+ T cell responses, as previously described. Studies have indicated that VTCN1 expression is positively linked to tumor progression and that it is a candidate target for the treatment of cancer [39]. The association of B7-H4 levels on tumor cells with adverse clinical and pathologic features endows B7-H4 with clinical significance [40]. Moreover, the expression of B7-H4 in tumor-associated macrophages is correlated with Foxp3+ regulatory T cells (Tregs) [41]. Because B7-H4 is expressed on a variety of tumor cells and tumor-related macrophages, blocking B7-H4 could improve the tumor microenvironment, thus enabling antigen-specific clearance of tumor cells [41]. Our study suggested that an enhanced level of VTCN1 was related to the OS and PFS (progression-free survival) times of PC. Nevertheless, no increased level of VTCN1 was shown in either PC or normal tissues.
The transmembrane receptor TIM-3 is encoded by HAVCR2 and expressed on a variety of cells [42]. The expression of TIM-3 is closely related to the exhaustion and impaired function of T cells. The interaction between TIM-3 and galectin 9 has been demonstrated to induce Th1 cell apoptosis, leading to reduced autoimmune and antitumor immune responses [22], also making TIM-3 a potential target for ICB. Of note, our study is the first to show that an upregulated level of LGALS9 in PC patients was associated with shorter OS and PFS times.
IDO1 is responsible for converting tryptophan (Trp) into its downstream catabolic product, kynurenine. Emerging studies have shown that IDO1 is expressed in a large number of human cancers. At the transcriptional level, IDO1 shows a strong association with T cell infiltration [43]. Although the expression of CSF1R, KDR, IL10RB, and PVRL2 was upregulated in PC samples, it was not associated with the prognosis of PC. CSF1R is mostly found in aggressive cell models and participates in the invasion and migration of tumor cells, and its expression is related to poor prognosis in cancer patients. Positive feedback exists between the expression of CSF1 and EGF in tumors [44]. Blocking the signal transduction mediated by the EGF receptor or the CSF-1 receptor interrupts this feedback loop and would inhibit the migration and invasion of macrophages and tumor cells. Activated VEGF-VEGFR2 signaling can promote the accumulation of Treg cells and control the migration of T lymphocytes [45]. The IL-10R signal on effector T cells and Treg cells is essential to maintain immune tolerance [46]. Emerging studies have identified PVRL2 as a new immune checkpoint [47].
Of note, this study revealed that LGALS9 and TGFBR1 were upregulated in PC compared to normal tissues. Moreover, we showed that LGALS9 and TGFBR1 were significantly associated with prognosis in PC. Although LGALS9 and TGFBR1 were not significantly correlated with tumor grade, we did observe an upward trend for LGALS9 and a downward trend for TGFBR1; the limited sample size may contribute to this result. In addition, the coexpression and bioinformatics analyses revealed that immunoinhibitors are involved in regulating multiple inflammatory and immune response-related pathways, as previously described. Interestingly, we found that LGALS9 was involved in regulating the type I interferon signaling pathway, the interferon-gamma-mediated signaling pathway, the RIG-I-like receptor signaling pathway, the NF-kappa B signaling pathway, the cytosolic DNA-sensing pathway, and the TNF signaling pathway. We also found that TGFB1 was related to mesoderm formation, cell-matrix adhesion, the TGF-beta signaling pathway, and the Hippo signaling pathway. These pathways have been demonstrated to be key regulators of tumorigenesis and responses to immune therapy.
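The coexpression analysis mentioned above can be approximated with a simple correlation-based network. In the following sketch, the gene list, the cutoffs, and the expr table are illustrative assumptions rather than the study's actual pipeline; gene pairs whose Spearman correlation across tumor samples passes a chosen threshold are kept as edges, and the resulting edge list can then be passed to an enrichment tool to recover pathway associations of the kind described in the text.

```python
import pandas as pd
from scipy.stats import spearmanr

# Illustrative subset of the immunoinhibitors discussed in the text.
GENES = ["LGALS9", "TGFB1", "TGFBR1", "VTCN1", "IDO1", "CSF1R", "KDR", "IL10RB", "PVRL2"]

def coexpression_edges(expr: pd.DataFrame, genes=GENES, rho_cutoff=0.4, p_cutoff=0.05):
    """Return (gene_a, gene_b, rho) edges whose pairwise Spearman correlation
    across samples passes the cutoffs; `expr` is a samples x genes matrix."""
    edges = []
    for i, a in enumerate(genes):
        for b in genes[i + 1:]:
            rho, p = spearmanr(expr[a], expr[b])
            if abs(rho) >= rho_cutoff and p < p_cutoff:
                edges.append((a, b, round(float(rho), 3)))
    return edges
```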
Several limitations of this study should also be noted. First, we showed that LGALS9 and TGFB1 have a crucial role in PC using a series of bioinformatics analyses; further experimental validation of their functions in PC would strengthen our conclusions. Second, more clinical samples should be collected to detect the expression of these immunoinhibitors in PC, which would provide more evidence to confirm their prognostic value.
Conclusion
In conclusion, our data suggest that immunoinhibitor mRNA levels are dramatically upregulated in PC and negatively correlated with OS. Together, these data suggest that these genes could serve as emerging prognostic indicators and therapeutic targets in PC patients. Our findings provide hints toward a better understanding of the mechanisms implicated in PC and toward more precise immunotherapeutic treatment of PC. Nevertheless, more research is needed to extend our findings and to translate them into a clinically useful strategy for early diagnosis and prognostic assessment in PC.
Data Availability
The datasets used during the present study are available from the corresponding author upon reasonable request. | 2020-10-14T05:06:54.651Z | 2020-09-30T00:00:00.000 | {
"year": 2020,
"sha1": "43e51aa64b325416ea6fa046ac3252c3aa1aee2e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2020/6138039",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "43e51aa64b325416ea6fa046ac3252c3aa1aee2e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Biology"
]
} |
119612515 | pes2o/s2orc | v3-fos-license | Note on the holonomy groups of pseudo-Riemannian manifolds
For an arbitrary subalgebra $\mathfrak{h}\subset\mathfrak{so}(r,s)$, a polynomial pseudo-Riemannian metric of signature $(r+2,s+2)$ is constructed whose holonomy algebra contains $\mathfrak{h}$ as a subalgebra. This result shows the essential distinction between the holonomy algebras of pseudo-Riemannian manifolds of index greater than or equal to 2 and the holonomy algebras of Riemannian and Lorentzian manifolds.
Introduction
The holonomy group of a linear connection on an n-dimensional manifold is contained in the Lie group GL(n, R) and represents an important invariant of the connection. Hano and Ozeki [6] showed that any connected linear Lie group G ⊂ GL(n, R) may be realized as the holonomy group of a space with a linear connection. Such a connection, as a rule, has non-zero torsion. On the contrary, the absence of torsion imposes algebraic conditions on the holonomy group. Berger used these conditions in order to obtain the classification of the connected holonomy groups of Riemannian manifolds and of the connected irreducible holonomy groups of spaces with torsion-free linear connections [3]. Later the Berger lists were corrected. The classification of the connected irreducible holonomy groups of torsion-free linear connections was obtained in [11]. The most important result turned out to be the classification of the connected holonomy groups of Riemannian manifolds; this result has many applications in geometry and theoretical physics [1,2,8,7]. Lately, interest in pseudo-Riemannian manifolds has grown. Recently the classification of the connected holonomy groups of Lorentzian manifolds was obtained [4,10]. Note that the study of the connected holonomy groups is equivalent to the study of the holonomy algebras, i.e. the corresponding Lie algebras. Consider a Lorentzian manifold of dimension n + 2 ≥ 4. Its holonomy algebra is contained in the Lorentzian Lie algebra so(1, n + 1). It is enough to consider the holonomy algebras contained in the maximal subalgebra preserving an isotropic line ℓ in Minkowski space, g ⊂ so(1, n+1)_ℓ = (R ⊕ so(n)) ⋉ R^n. With the algebra g its projection h to so(n) is associated. The key fact is that h ⊂ so(n) is the holonomy algebra of a Riemannian manifold.
Note that the maximal subalgebra of the pseudo-orthogonal Lie algebra so(r+2, s+2), r+s ≥ 2, preserving a two-dimensional isotropic subspace of the pseudo-Euclidean space R^{r+2,s+2} has the form (gl(2, R) ⊕ so(r, s)) ⋉ ((R^2 ⊗ R^{r,s}) ⋉ R). In this note, for any subalgebra h ⊂ so(r, s) a polynomial pseudo-Riemannian metric of signature (r + 2, s + 2) is constructed whose holonomy algebra equals h ⋉ ((R^2 ⊗ R^{r,s}) ⋉ R). It turns out that the holonomy algebra of a pseudo-Riemannian manifold of signature (r + 2, s + 2), i.e. of index greater than or equal to 2, can depend on an arbitrary subalgebra h ⊂ so(r, s). This indicates that the holonomy algebras of pseudo-Riemannian manifolds of index greater than or equal to 2 do not admit a transparent classification. In particular, the holonomy algebra of a pseudo-Riemannian manifold of signature (2, n + 2), n ≥ 2, may depend on an arbitrary subalgebra h ⊂ so(n); this shows the fundamental difference from the case of Lorentzian manifolds. In some sense, the obtained result is analogous to the result from [6]. Other results on the holonomy groups of pseudo-Riemannian manifolds can be found in the review [5].
For an arbitrary subalgebra h ⊂ so(r, s) we define the subalgebra g_h of the Lie algebra so(2, n + 2)_{<p_1, p_2>}. We denote an element of the Lie algebra g_h by (A, X, Y, c); the non-zero Lie brackets of such elements determine a decomposition of g_h.

Theorem 1. For any subalgebra h ⊂ so(r, s), the algebra g_h is the holonomy algebra of a pseudo-Riemannian manifold of signature (r + 2, s + 2).
Proof. For an arbitrary subalgebra h ⊂ so(r, s) we construct a pseudo-Riemannian metric on the space R^{r+s+4} and show that the holonomy algebra of this metric at the point 0 coincides with g_h.
Construction of the metric. We fix elements B_1, ..., B_N ∈ h generating the Lie algebra h, and consider the matrices (B^i_{jα}), i, j = 1, ..., n, of these elements with respect to the basis e_1, ..., e_n of the space R^{r,s}. The condition B_α ∈ h ⊂ so(r, s) means that each B_α is skew-symmetric with respect to the pseudo-Euclidean metric on R^{r,s}. Let v, z, x_1, ..., x_n, u, w be the standard coordinates on M = R^{r+s+4}, and consider the pseudo-Riemannian metric g constructed from these data. The metric g is of signature (r + 2, s + 2). Let hol be the holonomy algebra of this metric at the point 0. Considering the basis of the tangent space T_0 M given by the coordinate vector fields, we get η = g_0; this allows us to identify (T_0 M, g_0) with (R^{r+2,s+2}, η) and the holonomy algebra hol with a subalgebra of so(r + 2, s + 2).

Computation of the holonomy algebra. The constructed metric is analytic. From the proof of Theorem 9.2 of [9] it follows that hol is generated by the operators obtained from the curvature tensor and its covariant derivatives at the point 0, where ∇ is the Levi-Civita connection defined by the metric g and R is the curvature tensor. The indices a, b, c will run through all coordinates on M, and the indices i, j, k will take the values 1, ..., n. The Levi-Civita connection is determined by its Christoffel symbols Γ^a_{bc}, defined by ∇_{∂_b} ∂_c = Σ_a Γ^a_{bc} ∂_a, which can be found from the metric by the standard formula recalled below, where (g^{ab}) is the inverse of the matrix (g_{ab}) of the metric g. The components of the curvature tensor are defined by the equality R(∂_a, ∂_b) ∂_c = Σ_d R^d_{cab} ∂_d and can be computed from the Christoffel symbols in the standard way, also recalled below. In order to find the holonomy algebra we will need only certain Christoffel symbols, certain components of the curvature tensor, and certain components of the curvature tensor at the point 0; the computation of these values is direct. Note that there exists a recurrent formula relating these quantities, in which Γ_{aα} denotes the operator whose matrix is formed by the corresponding Christoffel symbols.

Proof of the inclusion hol ⊂ g_h. The equality Γ^a_{vb} = Γ^a_{zb} = 0 means that the vector fields ∂_v, ∂_z are parallel, i.e. ∇∂_v = ∇∂_z = 0. According to the holonomy principle, hol annihilates the vectors p_1 = (∂_v)_0 and p_2 = (∂_z)_0, and this implies the inclusion hol ⊂ g_{so(r,s)}. It remains to prove that pr_{so(r,s)} hol ⊂ h.
Computation of the holonomy algebra. The constructed metric is analytical. From the proof of theorem 9.2 from [9] it follows that hol is generated by the elements of the form where ∇ is the Levi-Civita connection defined by the metric g and R is the curvature tensor. The indices a, b, c will run through all coordinates on M, the indices i, j, k will take the values 1, ..., n. The Levi-Civita connection is determined by its Christoffel symbols Γ a bc , ∇ ∂ b ∂ c = a Γ a bc ∂ a , which can be found using the formula where (g ab ) is the inverse matrix to the matrix (g ab ) of the metric g. The components of the curvature tensor are defined by the equality R(∂ a , ∂ b )∂ c = d R d cab ∂ d and can be found in the following way: For the matrix (g ab ) we get In order to find the holonomy algebra we will need only the following Christoffel symbols: the following components of the curvature tensor: and the following components of the curvature tensor at the point 0: The computation of these values are direct. Note that there exists the following recurrent formula: where Γ aα denote the operator with the matrix (Γ a baα ). Proof of the inclusion hol ⊂ g h . The equality Γ a vb = Γ a zb = 0 means that the vector fields ∂ v , ∂ z are parallel, i.e. ∇∂ v = ∇∂ z = 0. According to the holonomy principle, hol annihilates the vectors p 1 = (∂ v ) 0 , p 2 = (∂ z ) 0 , this implies the inclusion hol ⊂ g so(r,s) . It remains to prove that pr so(r,s) hol ⊂ h, i.e.
Since the Lie algebra h is generated by the elements B 1 , ..., B N , the last equality implies the inclusion g h ⊂ hol. The theorem is true.
Correlation with the case of Lorentzian manifolds
Let us compare the obtained result with the classification of the holonomy algebras of Lorentzian manifolds [4,10]. The holonomy algebra of an (n + 2)-dimensional Lorentzian manifold is a subalgebra of the Lorentzian Lie algebra so(1, n + 1), n ≥ 0. Consider a basis p, e_1, ..., e_n, q of the Minkowski space R^{1,n+1} such that the Minkowski metric η has only the following non-zero values: η(p, q) = η(e_i, e_i) = 1. The subalgebra so(1, n + 1)_{Rp} ⊂ so(1, n + 1) preserving the isotropic line Rp has the form (R ⊕ so(n)) ⋉ R^n. It is enough to consider the subalgebras g ⊂ so(1, n + 1) contained in so(1, n + 1)_{Rp}. For an arbitrary subalgebra h ⊂ so(n) consider the corresponding subalgebra g_h ⊂ so(1, n + 1)_{Rp}. Leistner [10] proved the following non-trivial statement: if g_h is the holonomy algebra of a Lorentzian manifold, then the subalgebra h ⊂ so(n) must be the holonomy algebra of a Riemannian manifold. The proof is based on the fact that the holonomy algebra g ⊂ so(r, s) of an arbitrary pseudo-Riemannian manifold of signature (r, s) is generated by the images of algebraic curvature tensors; these tensors belong to the space R(g) consisting of the 2-forms on R^{r,s} with values in g satisfying the first Bianchi identity. For R ∈ R(g_h), the projection pr_h ∘ R is determined by a linear map P : R^n → h satisfying the identity η(P(X)Y, Z) + η(P(Y)Z, X) + η(P(Z)X, Y) = 0, X, Y, Z ∈ R^n.
Let P(h) be the space of such maps P . We get that h must be generated by the images of the elements of the spaces R(h) and P(h). Leistner showed that from this condition it follows that h must be generated by the elements of the space R(h); this means that h ⊂ so(n) is the holonomy algebra of a Riemannian manifold.
In the construction of the metric from the last section we used this property as well as the just described algebraic curvature tensors. | 2019-04-12T09:14:41.749Z | 2004-06-21T00:00:00.000 | {
"year": 2004,
"sha1": "025c63bf2b1df0b1b429288fc098fdb7041af72b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/math/0406397",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "025c63bf2b1df0b1b429288fc098fdb7041af72b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
245134603 | pes2o/s2orc | v3-fos-license | Advances in designing Adeno-associated viral vectors for development of anti-HBV gene therapeutics
Despite the five decades that have passed since discovery of the hepatitis B virus (HBV), and the development of an effective anti-HBV vaccine, infection with the virus remains a serious public health problem and results in nearly 900,000 deaths worldwide each year. Current therapies do not eliminate the virus and viral replication typically reactivates after treatment withdrawal. Hence, current endeavours are aimed at developing novel therapies to achieve a functional cure. Nucleic acid-based therapeutic approaches are promising, with several candidates showing excellent potencies in preclinical and early stages of clinical development. However, this class of therapeutics is yet to become part of standard anti-HBV treatment regimens. Obstacles delaying development of gene-based therapies include lack of clinically relevant delivery methods and a paucity of good animal models for preclinical characterisation. Recent studies have demonstrated safety and efficiency of Adeno-associated viral vectors (AAVs) in gene therapy. However, AAVs do have flaws, and this has prompted research aimed at improving the design of novel and artificially synthesised AAVs. The main goals are to improve liver transduction efficiency and to avoid immune clearance. Application of AAVs to model HBV replication in vivo is also useful for characterising anti-HBV gene therapeutics. This review summarises recent advances in AAV engineering and their contributions to progress with anti-HBV gene therapy development.
Background
Hepatitis B virus (HBV) infection is a major public health burden. Approximately 257 million individuals worldwide are chronically infected with the virus and therefore predisposed to cirrhosis, hepatocellular carcinoma (HCC) and liver failure [1]. HBV has a partially double-stranded relaxed circular DNA (rcDNA) genome of approximately 3.2 kilobases (kb) in length. The genome has four overlapping open reading frames (ORFs), namely the polymerase (P), core (C), surface (S) and X ORFs. Expression of HBV genes is controlled by four separate promoters: the basic core, preS1, preS2 and X promoters. The cis enhancer elements are enhancer I, located upstream of the X promoter, and enhancer II, which is located upstream of the basic core promoter. These regulatory elements are responsible for liver-specific viral gene expression. The P ORF encodes a DNA polymerase with priming, reverse transcriptase and RNase H activity. The C ORF comprises precore and core regions, collectively termed precore/core. The core region encodes HBV core antigen (HBcAg), which forms the viral capsid, whereas the precore region encodes the HBV e antigen (HBeAg), an immune suppressor and an indicator of active viral replication. The S ORF has three initiation codons: preS1, preS2 and S, which respectively initiate translation of the large, middle, and small surface proteins.
The X ORF encodes the regulatory HBx protein, which is essential for viral replication [2]. HBV infection is initiated by low affinity interaction of the infectious Dane particle with glycosaminoglycans located on the hepatocyte surface [3]. Enhanced by the presence of epidermal growth factor, the high affinity binding of the myristoylated large surface antigen to the sodium taurocholate co-transporting polypeptide (NTCP) receptor facilitates entry of the nucleocapsid. Mediated by endocytosis, the nucleocapsid is then transferred to the nucleus via the microtubules [4][5][6][7]. This is followed by nuclear release of rcDNA, which is then repaired to form covalently closed circular DNA (cccDNA). The cccDNA then serves as the template for transcription of pregenomic RNA (pgRNA) and viral protein-encoding mRNAs (reviewed in [8]). HBx binds DDB1, a component of a ubiquitin ligase complex, to render the structural maintenance of chromosomes protein 5/6 complex unstable and thereby facilitates HBV gene expression [9][10][11]. Translation of the precore/core RNA produces HBcAg. Encapsidation of the pgRNA is followed by its reverse transcription. The mature nucleocapsid is then transported to the nucleus to maintain cccDNA pools or acquires a surface antigen-containing envelope to form intact virions (Dane particles) that are secreted via the endoplasmic reticulum.
Current treatment of HBV infection requires long-term therapy and reduces severe complications and death, but rarely eliminates the virus. This is a result of the inability to clear the stable cccDNA episome from infected hepatocytes and of antigenemia-mediated exhaustion of HBV-specific CD8+ T cells and B cells [12,13]. Hence, developing a cure for HBV infection is a priority. Recently, gene-based and combinatorial strategies targeting multiple steps in the HBV replication cycle have shown promise. The potential of gene-based strategies for eliminating cccDNA and reducing antigenemia has been demonstrated in preclinical studies (reviewed in [14]). These gene-based approaches include gene editing and gene silencing. Strategies have applied technology based on clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) systems, transcription activator-like effector nucleases (TALENs) and RNA interference (RNAi) to inhibit HBV gene expression (reviewed in [15]). The current challenges of anti-HBV gene therapeutics include difficulties with obtaining a clinically relevant vector for hepatic transgene delivery and the paucity of suitable animal models that simulate natural HBV infection.
Recent US Food and Drug Administration (FDA) approval of the AAV-based therapies Zolgensma and Luxturna reinforces the well-established biosafety profile and efficacy of AAVs for human application [16][17][18]. AAV-mediated hepatic transgene expression is also now well established [19][20][21]. Moreover, HBV infection enhances AAV transduction of hepatocytes [22]. Despite these appealing features, use of AAVs has not been without challenges. These include low packaging capacity, reduced transduction efficiencies in specific tissues, induction of CD8+ T cell responses and clearance by pre-existing immunity. This review focuses on recent progress with modifying AAVs and their contribution to advancing anti-HBV gene therapy.
Biology of Adeno-associated viruses
AAVs are small viruses that cannot replicate on their own but depend on co-infection with a helper virus such as adenovirus, herpes simplex virus, vaccinia virus or human papillomavirus (reviewed in [23]). Twelve AAV serotypes have been identified to date. The non-enveloped viral capsid, comprising VP1, VP2 and VP3, has conserved eight-stranded β-barrel motifs, an α-helix and nine variable regions that confer AAV tropism diversity [24]. Multiple viral surface sites have been mapped and characterised as T-cell epitopes, immunogenic motifs and monoclonal antibody docking sites [25][26][27].
The AAV capsid encases a single-stranded DNA genome comprising the rep and cap genes flanked by two inverted terminal repeats (ITRs). In the presence of a helper virus, AAVs express their genes from a trio of promoters (p5, p19 and p40) and become lytic (reviewed in [23]). Transcription from all three promoters is terminated by a common polyadenylation signal. Expression from the p40 promoter produces the three capsid proteins (VP1, VP2 and VP3) and an assembly-activating protein (AAP). The capsid proteins assemble in a 1:1:10 ratio of VP1:VP2:VP3 to form an icosahedral capsid, while AAP mediates capsid assembly. Rep gene expression is driven from the p5 and p19 promoters to produce two large (Rep78 and Rep68) and two small (Rep52 and Rep40) Rep proteins (reviewed in [28,29]). AAV2 infection is initiated by binding to the heparan sulphate proteoglycan receptor and a co-receptor, fibroblast growth factor receptor. Endocytosis through clathrin-coated vesicles then mediates viral entry [30,31].
Engineering AAV genome for delivery of anti-HBV sequences
For AAV vector production, the ITR sequences are retained but the promoter sequences, rep and cap genes are removed to accommodate transgene cassettes (Fig. 1A). This generally allows insertion of a maximum of about 5 kb sequences into the vectors. Although this is enough for smaller anti-HBV effectors, such as RNAi activators, it precludes delivery of larger anti-HBV CRISPR/Cas and TALEN sequences. Early studies described successful production of oversized AAV vectors and packaging of genomes larger than 5 kb (Fig. 1B) [36][37][38][39][40]. However, during viral production AAVs with heterogeneous genome sizes with truncations are often produced from these oversized genomes. Upon transduction, large AAV genomes may be reconstituted by concatemerisation, but reconstitution using the truncated genomes is highly inefficient and not desirable for clinical application.
Based on ability of AAV genomes to concatemerise and serve as substrates for homologous recombination, another strategy to increase tansgene capacity entails use of dual AAV vectors [40,41]. The most commonly used are dual overlapping vectors, dual trans-splicing vectors and dual hybrid vectors ( Fig. 1C-E). The design is to split the expression cassette into two parts, each contained in an AAV, and the intact transgene is reconstituted in a cell after transduction by homologous recombination or concatamerisation. As with oversized vectors, reconstitution of dual vectors is inefficient. This results in poor transduction efficacies and a requirement for high vector doses to achieve therapeutically relevant effects [37,[42][43][44].
Recent studies have taken advantage of CRISPR/ Cas systems being made up of two components, i.e. the nuclease and the single guide RNA (sgRNA) (Fig. 1F). These may be expressed on separate AAVs and in combination effect DNA cleavage upon transduction of a cell by the two vectors [45]. With recent availability of smaller nucleases, a single AAV can now be used to deliver both the nuclease-and sgRNA-encoding sequences to effect cccDNA cleavage [46][47][48][49]. TALEN activity requires two subunits, each encoded by DNA of at least 3 kb, to effect dsDNA cleavage. Although the evidence is scant, two component vector systems should be applicable to delivering sequences that together constitute complete anti-HBV TALENS.
A requirement to convert an ssDNA AAV genome to a dsDNA before transgene expression is a limiting step of AAV-based gene transfer. For quicker transgene expression, the trs site may be mutated to inhibit terminal resolution. This results in AAVs bearing a long hairpin loop molecule with complementary duplex strands referred to as self-complementary AAVs (scAAVs, Fig. 1G). Although this reduces packaging capacity by half, scAAVs bypass the requirement for second strand synthesis with consequent faster and higher transgene expression [35,50].
AAVs are known to infect a diverse range of tissues, which might lead to undesirable off-target transgene expression [51]. Hepatic tissue-specific expression of anti-HBV gene therapies can be achieved by placing transgene expression under control of liver-specific promoters, such as Transthyterin (TTR) or mouse Transthyretin (mTTR) (Fig. 1) [19,20]. Liver-specific regulatory elements derived from core domains of human apolipoprotein hepatic control region, human α-1-antitrypsin and hybrid liver promoters successfully drive factor IX expression in the liver [52,53] and may be applicable to anti-HBV gene therapy. When in silico identified evolutionary conserved hepatocyte-specific cis-regulatory modules (CRMs) were incorporated into scAAVs, up to 100-fold higher transgene expression was achieved when compared to scAAVs cassettes containing the TTR promoter ( Fig. 1) [54].
AAV capsid engineering for improved transduction and evasion of pre-existing immunity
The requirement for a high AAV dose to achieve therapeutic effects in non-human primates has been reported to result in death [55]. Hence, production of AAV capsids that achieve high transduction efficiencies at low dose is an important goal of the field. AAV capsid structural properties determine vector tropism, immune detection and transduction efficiency. Hence, manipulation of capsid architecture is central to enhancing the vectors' therapeutic efficacy. High prevalence of pre-existing AAV-specific antibodies in humans, which limits AAV-mediated gene transfer, is another major reason for investigating utility of AAV capsid modification [56,57]. In addition, proteasomal degradation, breakdown of capsids following endosomal escape and MHC1 presentation of AAV peptides with cell-mediated elimination of infected hepatocytes result in poor transgene expression [58][59][60][61]. Approaches have mainly involved rational design or directed evolution to modify AAV capsids. The former relies on prior knowledge of capsid architecture and intracellular trafficking of AAVs. By contrast, directed evolution utilises stringent selection methods to concentrate and confer advantageous and beneficial traits on a vector.
Rational designs of capsids
Several AAV variants with desirable features have been developed by using different rational design strategies. Some of the AAV strategies discussed below were used to develop variants for transduction of non-liverderived cells. However, these approaches can be applied to improve efficiency of liver-targeting vectors. Docking sites of monoclonal antibodies (mAbs) or capsid antigenic motifs (CAMs) located in capsid variable regions (VR) serve as targets of capsid modification to avoid neutralising antibody (NAb) recognition. When these CAMs were mutated to produce libraries of novel AAV capsid variants (AAV-CAMs) followed by iterative rounds of selection in endothelial cells, antigenically advanced capsids were identified [62,63] (Fig. 2).
Using a method called barcoded rational AAV vector evolution (BRAVE), rational design and screening for AAV variants that ably transduce cells of the central nervous system (CNS) were identified. To build an AAV library, proteins with synaptic affinity were identified using bioinformatics, and their peptide fragments inserted at a specific position to produce mutant capsids. AAV genomes bearing unique nucleotide barcode sequences were packaged into the mutant capsids to enable identification of individual capsid structures [64,65]. Another method designated Cre-recombinationbased AAV targeted evolution (CREATE) was used to generate vectors capable of transducing the CNS. The 7-mer random PCR-generated fragments were inserted between sequences encoding amino acid residues 588 and 589 of the capsid gene. The downstream poly A signal was flanked by lox P sites. AAV library administration in a cell type specific cre transgenic mice allowed inversion of the poly A signal, creating a sequence that could be amplified using pre-designed PCR primers. This led to the isolation of capsids that were capable of infecting creexpressing cells (Fig. 2) [66].
Phosphorylation of AAV tyrosine or lysine residues by host cellular machinery leads to AAV capsid degradation by the ubiquitin-proteasome pathway [61,67]. Other post-translational modifications such as glycosylation, SUMOylation, and neddylation also impact on viral transduction. Glycosylation facilitates viral cell entry, trafficking to the nucleus, virulence and immune evasion (reviewed in [68]). As with ubiquitination, neddylation and SUMOylation form reversible covalent attachments at lysine residues. These modifications affect protein stability, subcellular localisation, structure and function to inhibit AAV transduction of cells [69,70]. Mutation or chemical modification of lysine residues in AAV2 or AAV8 capsids where glycosylation, neddylation or SUMOylation occurs resulted in higher transgene expression and decreased interaction of the AAV with NAbs ( Fig. 2) [71][72][73][74].
Directed evolution designs of novel capsids
DNA shuffling of capsid-encoding sequences from multiple AAV serotypes has also been used to generate libraries. This approach has been used to identify capsids with improved hepatocyte transduction capabilities (Fig. 3) [75][76][77]. Libraries of AAVs may also be generated by random capsid sequence mutagenesis. Stringent selection of these mutant libraries in vitro and in chimeric murine livers identified variants with improved transduction efficiencies (Fig. 3) [78,79]. Another strategy employed phylogenetic techniques to predict ancestral AAV capsid sequences that mediated higher transgene expression than natural AAV serotypes [80] (Fig. 3). Studies carried out in vivo on small and large animals identified another antigenically distinct and antibody-evading ancestral AAV vector that efficiently transduced a variety of cells including the liver [81][82][83][84][85][86].
How has anti-HBV gene therapy designs benefited from the advances in AAV vector developments? Application of improved AAVs to deliver anti-HBV gene therapeutics
The higher liver transgene expression from AAV8pseudotyped scAAVs enables use of low vector doses. Also, the dsDNA nature of the scAAV genome makes it more stable [52,[87][88][89]. These properties are favourable for targeting chronic viral infections such as are caused by HBV. The feasibility of delivering anti-HBV RNAi activators using scAAVs is well documented [90,91]. In mice, following a single dose of AAV8-pseudotyped scAAVs a reduction of HBV replication markers was observed over 10 months [19]. Targeting both HBV and the host factors that mediate fibrosis with scAAVs improves therapeutic efficacy [92]. Supporting this combinatorial approach is the observation that scAAVs used to co-deliver an RNAi effector against HBV and Argonaut2, the rate limiting host factor in the RNAi pathway or a RNAi activator sense strand targeted decoy, improves safety, specificity, and efficacy [20,93,94].
Because of the limiting packaging capacity of scAAVs, several studies have used ssAAVs for delivery of a smaller Staphylococcus aureus Cas9 (SaCas9) with single or combination of guides targeting several coordinates in the cccDNA. These studies showed significant decline in markers of HBV replication in cultures and in nice [46][47][48]. A recent study has illustrated that using ssAAVs engineered to express saCas9 from a chimeric Fig. 2 Rational strategies of AAV capsid modification. Monoclonal antibody docking sites, capsid antigenic motifs (CAM) and lysine residues associated with ubiquitination, neddylation, SUMOylation or glycosylation in the capsid are mutated by site directed mutagenesis. Mutated capsid sequences are cloned in to a Rep-encoding plasmid to create a plasmid library that is then used to package a reporter encoding AAV genome. The AAV library is then put through several rounds of selection in vivo or in cell culture to enrich for AAV variants with desirable traits such as hepatocyte transduction and immune evasion. Chemical modification with compounds with hepatocyte affinity e.g. GalNAc, are attached to the AAV capsid to generate AAVs with enhanced hepatocyte transduction. Barcoded rational AAV vector evolution (BRAVE) involves bioinformatics identification of proteins with hepatocyte affinity. A DNA library encoding peptide fragments is generated and inserted in the specific positions of the capsid sequence within the Rep-encoding plasmid to produce a mutant capsid plasmid library. This is followed by packaging of the reporter encoding AAV genome bearing a unique nucleotide barcode sequence. Cre-recombination-based AAV targeted evolution (CREATE) involves insertion of random PCR generated fragments between specific capsid gene positions within the Rep-encoding plasmid to produce mutant capsid plasmid library. A reporter-encoding AAV genome with the poly A signal flanked by lox P sites is then packaged. The AAV library is selected in hepatocytes specific Cre expressing mice liver-specific promoter resulted in preferential liver expression and superior suppression of HBV replication in mice [95]. One study that used scAAVs to deliver anti-HBV gene editors co-transduced cells with two scAAVs, each carrying one ZFN monomer against HBV Pol, C or X genes. scAAVs against Pol resulted in near undetectable levels of HBV replication makers [96].
The anti-HBV gene therapy field has not yet fully capitalised on the availability of modified or synthetic AAV capsids described above. However, these developments, especially the genetically modified and synthetic AAV capsids that shows high liver transduction efficiencies, are alluring and promise to bring positive outcomes to HBV treatment.
Application of AAVs to model chronic HBV infection
The plethora of AAV serotypes, either extant or artificially synthesized and recent discovery of various receptors and co-receptors that facilitate AAV binding, are valuable tools to model HBV replication [97,98]. These models are key to evaluating novel anti-HBV therapeutic interventions before clinical translation. Despite impressive progress with understanding the molecular biology of HBV, an easily accessible model that can recapitulate all stages of HBV remains elusive. Although chimpanzees are susceptible to HBV infection, and their immune responses are similar to those observed in humans, high cost and ethical concerns limit use of these animals in research (reviewed in [99]). Models using species-specific hepatitis strains such as duck hepatitis B virus and woodchuck hepatitis virus are limited by infection mechanisms and disease manifestations in these models that differ from natural HBV infection (Table 1) [100,101].
Mouse models to simulate HBV replication remain the most accessible and commonly used. Chimeric mice with livers engrafted with human hepatocytes are valuable, but use of these animals is limited by their extreme immuno-deficiency and difficulties with maintaining hepatocyte function over long periods of time [102][103][104][105]. Transgenic mice with integrated DNA comprising greater-than-genome-length HBV sequences mimic chronic HBV replication [106][107][108]. However, transgenic mice show inter-individual variability, variation of HBV Fig. 3 Directed evolution strategies of AAV capsid modification. DNA shuffling of various serotypes' cap sequences is achieved by fragmenting the capsid sequences and assembly to create hybrid capsid sequences, which can be cloned in the a Rep-encoding plasmid to create an AAV capsid plasmid library. The plasmid is then used to package a reporter-expressing AAV genome followed by selection in vitro or in vivo. Capsid sequence random mutagenesis can be performed to create a mutant capsid library, which is cloned in to a Rep-encoding plasmid to create an AAV library that can be selected. Ancestral capsid sequences, predicted using bioinformatic tools, can be synthesized in vitro and used to produce an ancestral capsid plasmid library and an AAV library before selection replication markers over time and high markers of HBV replication that often exceed those of natural HBV infection (Table 1) [19,109].
Use of recombinant adenoviruses and AAVs to deliver greater-than-genome-length HBV sequences has also been explored to simulate HBV replication in vivo. Adenoviral vectors efficiently induce HBV replication in mice [110][111][112]. However, transduction results in a strong immune response to the adenoviral vectors and early clearance of transduced hepatocytes. Three studies using an AAV8-carrying replication-competent greater-thangenome-length HBV genotype D (AAV-HBV model) demonstrated the potential of these vectors for simulation of chronic HBV infection [113][114][115]. HBsAg, HBeAg and HBcAg expression, accompanied by hepatitis B virion production over a period of up to 16 months, were observed. Production of anti-HBcAg antibodies, but not anti-HBsAg and HBeAg antibodies which is a phenomenon observed in HBV chronic carriers, was also demonstrated. Interestingly, between 12 and 16 months, mice developed features of HCC and elevated makers of liver injury. Liver fibrosis, chronic liver injury and minimal or no acute inflammatory responses were observed in these mice. Given that T cell exhaustion is a well-documented feature of chronic HBV infection, the lack of a significant immune response in these mice is perhaps not surprising (Table 1) [116,117]. Although the mechanism of HCC, fibrosis and chronic liver damage in AAV-HBV murine models remains to be established, expression of HBV antigens such as HBx and HBsAg, together with AAVmediated HBV DNA integration may be the contributing factors. Interestingly, recent studies showed that the AAV-HBV model based on either HBV serotype A, B, C or D may result in formation of cccDNA in murine hepatocytes [118,119]. Although the replication intermediate was lost over time in transduced livers, the sequence and functionality was not distinguishable from cccDNA derived from natural HBV infection. The mechanism of its formation is not clear, however the cccDNA is HBV replication-independent and originates from intramolecular recombination of the HBV genome ends [119].
Another model recently established in non-human primates used AAV8 vectors expressing human NTCP (AAV8-hNTCP) [120,121]. The study also employed helper-dependent adenoviral vectors (HDAds) to deliver sequences encoding hNTCP. Rhesus macaques, naturally not infectable by HBV, were injected with hNTCP-expressing vectors and then infected with HBV. HBV gene expression and HBV replication intermediates were detected over a period of 42 weeks. Moreover, markers of liver injury and T cell responses to HBV antigens were reported. Importantly the essential replication intermediate comprising cccDNA could also be detected in these macaques. Although a higher AAV vector dose was required and HDAd vectors promoted superior hNTCP expression and HBV infection, adenoviral vectors are limited by their high immune stimulation and the resultant short-term transgene expression. Hence, administration of immunosuppressants before injecting HDAds was required to prolong HBV gene expression. This makes HDAd-based models less favourable for mimicking chronic HBV infection.
Conclusion
Studies described here show that progress with the design of improved AAV vectors will assist with addressing challenges facing development of anti-HBV gene therapy. Improved liver transduction will make it possible to administer lower vector doses to achieve clinically relevant therapeutic outcomes. Engineering AAVs to produce vectors that can evade systemic and cellular trafficking hurdles to deliver anti-HBV payloads to target cells expands the toolbox of gene therapy for the viral infection. Harnessing AAVs' liver tropism to model HBV replication also shows potential (Table 1 compares the models discussed, including the AAV-HBV mouse model [113][114][115] and hNTCP-expressing rhesus macaques [121,123]). Although murine models remain the most accessible and simple, the demonstration that AAVs may be used to make non-human primates susceptible to HBV infection is significant. Collectively these developments will facilitate clinical translation of AAV-based, as well as other potentially curative therapies, to eliminate chronic HBV infection. | 2021-12-14T22:04:38.303Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "8163363885d972251e952c50c461e6df4805caef",
"oa_license": "CCBY",
"oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/s12985-021-01715-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2b1691059e88c159ecb4e617578a5e7ad3e3fda5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1966159 | pes2o/s2orc | v3-fos-license | Disseminated BCG Infection in a patient with Severe Combined Immunodeficiency
Disseminated mycobacterial infection after bacillus Calmette-Guerin (BCG) vaccination is a very rare disorder, occurring mostly in patients with immunologic deficiency. We report a case of disseminated BCG infection in a 16-month-old girl with severe combined immunodeficiency. Plain radiographs showed multiple osteolytic lesions in the femora, tibiae, humerus, and phalanges. Abdominal sonography and CT scanning revealed multiple nodules in the spleen, and portocaval lymphadenopathy.
Immunization of children with bacillus Calmette-Guerin (BCG), a live attenuated bacterial vaccine derived from Mycobacterium bovis, is recommended by the World Health Organization in communities with a high prevalence of tuberculosis. BCG vaccines are extremely safe in immunocompetent hosts, but possible complications range from local inflammatory reactions (lymphadenitis, abscess, fistula formation) to disseminated diseases (osteomyelitis, bacteremia, meningitis) and death (1). Significant local reactions occur in 0.1-1% of vaccine recipients, whereas serious disseminated BCG infection occurs very rarely, in fewer than one in a million cases (2). The disseminated form of the disease, which has a high mortality rate, develops mostly in patients with immunologic deficiency (3). Radiological reports of disseminated BCG infection have focused mainly on osteomyelitis (3-5). We describe the radiological manifestations of disseminated BCG infection in an infant with severe combined immunodeficiency.
At the age of 10 months she was readmitted due to left axillary and cervical lymphadenopathy; several courses of antibiotic treatment had failed to change the course of her illness. There was no family history of tuberculosis, but she had been vaccinated with BCG at the age of one month. A specimen was obtained from the axillary lymph node; routine bacteriological culture was negative, but numerous acid-fast bacilli were demonstrated on smears, and mycobacterial culture revealed growth of Mycobacterium bovis, BCG strain. She was treated with isoniazid (15 mg/kg per day), rifampicin (15 mg/kg per day), and pyrazinamide (20 mg/kg per day). In spite of antituberculous therapy, her lymphadenopathy worsened. At the age of 16 months, she was readmitted with persistent lymphadenopathy in the left axilla and cervical area, and reddish swelling of the right third finger. A chest radiograph revealed no abnormality of the lung parenchyma or hilar lymphadenopathy, but a round osteolytic lesion was incidentally detected in the right distal humerus. A plain radiograph of both lower extremities revealed multiple, round, osteolytic lesions in the femora, tibiae, and fibulas (Fig. 1A). Both tibiae showed extensive periosteal thickening along the entire diaphyses. Ultrasound examination of the abdomen showed enlarged lymph nodes containing an eccentric hypoechoic portion at the porta hepatis and hypoechoic nodules in the spleen (Figs. 1B, C). Contrast-enhanced CT scanning demonstrated the presence of multiple nodules in the spleen, and portocaval lymphadenopathy with a central low-attenuation area (Fig. 1D).
DISCUSSION
BCG is a live attenuated bacterial vaccine for infants born in countries with a high incidence of Mycobacterium tuberculosis infection, and protects children from miliary tuberculosis and tuberculous meningitis. It is considered a safe vaccine with a low incidence of complications, the overall rate of which has been estimated to be 1% or less (6). The incidence of complications, however, has varied according to the vaccine strain, dosage, and method of administration. The most frequent complications are local inflammatory reactions, including hypersensitivity, localized lymphadenitis, and local abscess or fistula formation.
Disseminated disease, the most serious complication of BCG vaccination, develops in fewer than one case in a million. Disseminated BCG infections have occurred following vaccination of children with immunodeficiency disorders such as severe combined immunodeficiency, chronic granulomatous disease, complete DiGeorge syndrome, AIDS, HIV infection or idiopathic immunodeficiency of genetic origin, but only rarely in apparently normal individuals (6). Severe combined immunodeficiency, as in this case, usually presents in infancy with severe infections, and is characterized clinically and immunologically by profound abnormalities of cell-mediated and humoral immunity. BCG vaccination is contraindicated in infants with immunodeficiency, but such infants are usually vaccinated before the diagnosis is made. In other cases, immunodeficiency may be diagnosed after the development of BCG complications. Our patient was vaccinated with BCG before severe combined immunodeficiency was diagnosed. Subsequently, to prevent BCG complications, her younger brother was not vaccinated.
Disseminated BCG infection may involve the bones, joints, liver, spleen, and lymph nodes, and is usually fatal. Tuberculous osteomyelitis in infants and young children usually develops after BCG vaccination, and its frequency is assumed to be approximately one case per million vaccinations (7). Bone changes generally appear between three months and five years after vaccination. BCG osteomyelitis involves the hand, foot, long bone, spine, rib, sternum, and clavicle, the tubular bones of the fingers and toes being the most common sites of involvement (4). BCG osteomyelitis generally occurs in the metaphysis or epiphysis of the long bones or bones of the fingers and toes. In immunocompetent children, a solitary lesion with a sclerotic margin usually appears (5), whereas in children with immunodeficiency disorders there is multiple involvement. Hugosson et al. (3) reported a case of extensive disseminated skeletal osteomyelitis in a 9-month-old female infant with severe combined immunodeficiency. The bone lesions were mainly located in the metaphysis of the distal parts of the femora, proximal parts of the tibiae, both proximal humeral shafts, the ribs and skull, and no cortical thickening was observed. In our case, the involved bones were similar, except for the rib and skull, but the lesions were mainly in diaphyseal locations of the long bones, with diffuse cortical thickening. The differential diagnoses of multiple osteolytic bone lesions in infants include metastatic neuroblastoma, leukemia, osteomyelitis, and Langerhans cell histiocytosis. Metastatic neuroblastoma and leukemia may have a similar radiographic appearance, demonstrating permeative or moth-eaten destruction and periosteal new-bone formation. Pyogenic osteomyelitis shows permeative bone destruction, and mainly involves the metaphysis. Langerhans cell histiocytosis may destroy or expand the cortex, and overlying periosteal new-bone formation is usually present, though in patients with this condition, compact periosteal reaction is also a typical feature (8).
Disseminated BCG infection in patients with severe combined immunodeficiency may also involve the lung, liver, spleen, and lymph nodes. In our case, the spleen and the lymph nodes at the porta hepatis were also involved. US and CT scans of the enlarged lymph nodes at the porta hepatis revealed central hypoechoic or low-attenuation lesions, which were thought to be areas of caseation necrosis. Demonstration of hypoechoic or low-attenuation areas within enlarged lymph nodes on ultrasound and CT may therefore be helpful in the diagnosis of tuberculosis.
In summary, disseminated BCG infection in our patient with severe combined immunodeficiency led not only to disseminated osteomyelitis but also to lymphadenitis and visceral organ involvement. | 2016-05-04T20:20:58.661Z | 2000-06-01T00:00:00.000 | {
"year": 2000,
"sha1": "8e2716c45f839267476c8b29b90afceb368c6826",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc2718164?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "8e2716c45f839267476c8b29b90afceb368c6826",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258999765 | pes2o/s2orc | v3-fos-license | A Data-Driven Computational Model for Engineered Cardiac Microtissues
Engineered heart tissues (EHTs) present a potential solution to some of the current challenges in the treatment of heart disease; however, the development of mature, adult-like cardiac tissues remains elusive. Mechanical stimuli have been observed to improve whole-tissue function and cardiomyocyte (CM) maturation, although our ability to fully utilize these mechanisms is hampered, in part, by our incomplete understanding of the mechanobiology of EHTs. In this work, we leverage the experimental data produced by a mechanically tunable experimental setup to generate tissue-specific computational models of EHTs. Using imaging and functional data, our modeling pipeline generates models with tissue-specific ECM and myofibril structure, allowing us to estimate CM active stress. We use this experimental and modeling pipeline to study different mechanical environments, where we contrast the force output of the tissue with the computed active stress of CMs. We show that the significant differences in measured experimental forces can largely be explained by the levels of myofibril formation achieved by the CMs in the distinct mechanical environments, with active stress showing more muted variations across conditions. The presented model also enables us to dissect the relative contributions of myofibrils and extracellular matrix to tissue force output, a task difficult to address experimentally. These results highlight the importance of tissue-specific modeling to augment EHT experiments, providing deeper insights into the mechanobiology driving EHT function.
Introduction
The development of engineered heart tissues (EHTs) for use in regenerative therapies, drug testing, and disease modeling has the potential to improve the life expectancy of millions of people that suffer from cardiac disease [1]. Current state-of-the-art EHTs are manufactured using cardiomyocytes (CMs) derived from induced pluripotent stem cells (iPSCs) and scaffolds that mimic the structure and mechanics of native cardiac tissue [2]. However, iPSC-CM maturation in current EHT platforms remains a challenge [3]. This is evidenced by the lack of hallmark attributes of mature CMs, such as myofibril alignment, protein expression, calcium handling, and electrophysiological response, among others [4]. Biophysical stimuli, such as electrical pacing [5,6] and mechanical loading [7,8,9], have been shown to enhance iPSC-CM maturation in different EHT platforms. However, the underlying mechanisms driving these observations remain incompletely understood [10], preventing scientists from efficiently optimizing the application of these techniques.
Several in-vitro studies have attempted to elucidate the impact of different mechanical microenvironmental perturbations on the maturity of iPSC-CMs. For example, Leonard et al. [8] showed that increasing the resistance against which the EHTs contract increases force generation. Similarly, Bliley et al. [9] showed that EHTs grown under passive stretch evolve into more mature tissues. Furthermore, other studies have shown that culturing CMs on hydrogels with a modulus similar to that of the healthy adult myocardium enhances electro-mechanical activity [11,12]. DePalma et al. expanded upon this, showing that iPSC-CMs adapt their contractile behavior in response to ECM mechanics on synthetic, fibrous matrices that better mimic the anisotropic mechanics of the cardiac ECM [7]. This and other work by Allen et al. [13] also showed that increasing the anisotropy of the fibrous matrix results in more aligned myofibrils. These studies highlight the impact of different mechanical cues on EHT formation and function. However, the inherent variability of EHTs (in their formation, maturation, ECM properties and structure, myofibril formation, etc.) clouds our interpretation of each mechanical parameter's relevance and of the underlying mechanobiological drivers.
A way of tackling this complex task and providing insight into the multifaceted differences in EHTs is through biomechanical modeling. Biomechanical models have been used to understand the mechanics of in-vitro systems, helping decipher the mechanobiology behind the alignment of cells [14,15], the biomechanics of microtissue failure due to necking [16], cell force transmission through fibrous substrates [17,18], and mechanics at cell-to-cell junctions [19]. These models enable the examination of experimental conditions, as well as exploration through in silico testing, whereby scenarios that would be virtually impossible to construct experimentally can be studied. Biomechanical models, particularly at the whole-organ level, have further enabled the integration and assimilation of imaging and patient data [20,21,22,23], providing pipelines for generating patient-specific models describing cardiac function. These pipelines enable the integration of realistic structure and function and provide a platform for understanding the significance of these factors on localized mechanics, such as strain or stress.
To bypass many of the uncertainties associated with EHTs and delve into the underlying impact of structure and function, we propose the integration of a novel EHT platform with tissue-specific computational models. In this study, we leverage the experimental data obtained in the fibroTUG platform [24], which uses electrospinning techniques to generate synthetic, fibrous matrices with defined alignment and material characteristics. Imaging provides detailed information on the ECM and myofiber architecture on a tissueby-tissue basis. Using image-based modeling and data assimilation, we create in-silico twins of individual fibroTUGs and explore how different factors of the ECM and myofibril structure impact localized cellular function and the resultant force measures commonly reported in the literature. Through this integrated experimental-computational approach, we observe that the resultant forces of EHTs can be substantially biased by ECM alignment and mechanobiological factors driving myofibril formation and function.
The paper is structured as follows: in the methods section, we detail the process of creating the computational model from the experimental data. In the results section, we validate our model by comparing the simulation results to image-based algorithms and use experimental and non-experimental conditions to analyze the role of the different mechanical variables in the relationship between iPSC-CM stress and force output in EHTs. We follow with a discussion of the model results and of how computational modeling approaches can be used to explore the mechanobiology of these tissues.
Briefly, dextran vinyl sulfone (DVS) fibers were electrospun onto an array of polydimethylsiloxane (PDMS) posts attached to a rotating mandrel. The bending stiffness of the posts, k_p, can be tuned by altering the geometry of the posts and is measured experimentally, as described in DePalma et al. (2023) [24]. By varying the rotational velocity of the mandrel, the alignment of the fibers can also be controlled, resulting in aligned fibrous matrices that exhibit high anisotropy or random matrices that are more locally isotropic [7].
The DVS fibers present between posts were stabilized by exposing them to UV light. Upon hydration, the unstabilized fibers dissolve, leaving only the fibers suspended between two posts. Secondary crosslinking in solutions with varying concentrations of photoinitiator (LAP) resulted in matrices of varying stiffness (higher LAP results in stiffer matrices). The stiffness of the matrix was characterized by indenting the matrices incrementally, measuring the resulting post deflection (and thus the post force F_p), and then calculating the global strain of the matrix (Fig. 1B). Further, images of the indented DVS fibers were taken to record their organization and pair it with their force response. This setup allows us to tune and control the post stiffness (soft, k_p = 0.41 N/m; stiff, k_p = 1.2 N/m), fiber alignment (aligned or random), and fibrous matrix stiffness (soft, LAP = 0.1 mg/mL; stiff, LAP = 5.0 mg/mL), parameters that determine the mechanical environment where the iPSC-CMs develop [24]. While many permutations are possible, in this paper we studied the following combinations: soft fibers/soft posts, stiff fibers/soft posts, and soft fibers/stiff posts, for both aligned and random matrices, leading to a total of six conditions.
After defining matrix conditions, purified cultures of iPSC-CMs were seeded onto the fibroTUG and cultured in this environment for seven days. On day 7, time-lapse videos of the microtissue's spontaneous contractions were acquired (see Video S1). These videos were processed to obtain post-displacement curves, as seen in Fig. 1C. Finally, immunofluorescence staining was used to image cell nuclei, titin, and the DVS fibers (see Fig. 1D).
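Since the post stiffness k_p is known, the post-displacement curves in Fig. 1C translate directly into force traces. The sketch below assumes a displacement trace expressed in micrometres and estimates the twitch force as the peak-to-baseline excursion; the numerical trace is a hypothetical example, not measured data.

```python
import numpy as np

def post_force_trace(displacement_um: np.ndarray, k_post: float) -> np.ndarray:
    """Convert a post-displacement trace (um) into a force trace (uN) via F = k_p * delta.
    k_post is the post bending stiffness in N/m; 1 N/m x 1 um = 1 uN."""
    return k_post * np.asarray(displacement_um, dtype=float)

def twitch_force(displacement_um: np.ndarray, k_post: float) -> float:
    """Estimate the twitch force as the peak-to-baseline force excursion,
    taking the resting (minimum) displacement as the baseline."""
    force = post_force_trace(displacement_um, k_post)
    return float(force.max() - force.min())

# Hypothetical trace with the soft-post stiffness quoted above (k_p = 0.41 N/m)
trace_um = np.array([0.0, 0.2, 1.5, 3.0, 1.4, 0.1])
print(twitch_force(trace_um, k_post=0.41))   # ~1.23 uN
```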
Image Processing
This subsection details the process of extracting information from the DVS fiber and titin images ( Fig. 2) to obtain quantities that characterize the structure of the matrix and the myofibril network that can then be projected into a 2D fibroTUG model. DVS fibers. The processing starts by creating a mask of the fibers that is used to compute the local fiber density ρ f , alignment f 0 , and dispersion κ f as shown in Fig. 2A. The methods used to compute these quantities are detailed in Supplementary Information Section S1. The fiber density ρ f takes values between 0 (no fibers) and 1 (fiber), allowing us to define the mechanical presence of fibers. The fiber alignment vector, f 0 , provides the direction of the local stiffness anisotropy in the direction of the fibers. Finally, the dispersion κ f is a parameter used in continuum models [25] to represent regions where fibers follow a distribution around a mean vector. In our case, it allows the local stiffness to move from anisotropic (no dispersion, κ f = 0) to isotropic (κ f = 0.5) depending on the local fibers.
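As a concrete illustration of how such fields can be derived from a binary fiber mask, the following Python sketch uses a smoothed structure tensor to estimate the local density, orientation and a dispersion-like parameter. This is a minimal, illustrative approach and not the authors' pipeline (their exact procedure is given in Supplementary Information Section S1); the function name, smoothing window and the coherence-to-dispersion mapping are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def fiber_fields(mask, sigma=8.0):
    """Estimate local fiber density, orientation and dispersion from a binary
    fiber mask using a smoothed structure tensor (illustrative sketch only)."""
    m = mask.astype(float)
    # Local density: weighted fraction of fiber pixels in a Gaussian neighbourhood.
    rho = gaussian_filter(m, sigma)
    # Image gradients of the mask.
    gx, gy = sobel(m, axis=1), sobel(m, axis=0)
    # Structure tensor components, smoothed over the neighbourhood.
    Jxx = gaussian_filter(gx * gx, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    # Dominant gradient direction; fibers run perpendicular to it.
    theta = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy) + np.pi / 2.0
    f0 = np.stack([np.cos(theta), np.sin(theta)], axis=-1)
    # Coherence in [0, 1]: 1 = perfectly aligned fibers, 0 = locally isotropic.
    root = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    coherence = np.where(Jxx + Jyy > 0, root / (Jxx + Jyy + 1e-12), 0.0)
    # Map coherence to a dispersion-like parameter: 0 (anisotropic) to 0.5 (isotropic).
    kappa = 0.5 * (1.0 - coherence)
    return rho, f0, kappa
```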
Titin. Myofibrils were identified via immunofluorescent imaging of iPSC-CMs containing a titin-GFP reporter, which allows for the visualization of the sarcomeres' Z-discs [26]. When images of titin were available for the whole domain, tissue-specific fields describing the structure of the myofibril network were computed. This was done following a similar strategy to the processing of the DVS fibers, where the titin images were used to find the myofibril density ρ m , alignment m 0 , and dispersion κ m (Fig. 2B). The steps to compute these quantities from the images are presented in Supplementary Information Section S2. The myofibril density ρ m (taking values from 0 to 1) allows us to define the contractile regions. The alignment vector m 0 defines the direction of the contraction and the dispersion κ m activates an isotropic contraction in regions of unorganized myofibrils.
To obtain the visualization of the titin in the whole tissue shown in Fig. 1D, several smaller images were stitched together since a high magnification is needed to have a clear view of the Z-discs. This is a slow process, and since the main objective was to quantify myofibril structural characteristics, we decided to accelerate the process by only imaging the center of the tissues and statistically characterize the myofibril alignment and density. Given the six different experimental conditions, the myofibril orientation was analyzed and compiled for all the images available (N ≥ 7 per condition). A Von Mises distribution [25] was fit to the histogram of angles (measured relative to the post-to-post direction, see Supplementary Information Section S2.1 for more details). The distribution is characterized by a parameter ξ (assuming the mean is zero), with high values of ξ indicating myofibrils oriented preferentially in the post-to-post direction. The resulting probability density functions (PDFs) and the data histograms are shown in Fig. 3A. The myofibril density was computed for these partial images and then a mean value was computed per condition (see Fig. 3B). The mean density values were then normalized by the aligned, soft fibers/soft post condition value. Table 1 shows the final density values used. Given a fiber network and geometry, we used the probability fitting and the density measurements specific for each condition to generate computational myofibril fields that followed these two parameters on top of the fiber fields. The procedure to create these fields is shown in Supplementary Information Section S3. One important thing to notice is that whenever this approach was taken, no myofibril dispersion was considered (κ m = 0), as it is difficult to generate computationally in a meaningful way. The impact of these considerations was studied in Section S3.1 and S3.2 of the Supplementary Information.
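A simple way to obtain the concentration parameter ξ of a zero-mean Von Mises distribution from measured myofibril angles is a maximum-likelihood fit based on the mean resultant length. The sketch below is illustrative only and uses synthetic angles in place of the titin-image measurements; the helper name and bounds are assumptions.

```python
import numpy as np
from scipy.special import i0e, i1e
from scipy.optimize import brentq

def fit_vonmises_concentration(angles_rad):
    """Maximum-likelihood estimate of the concentration xi of a zero-mean
    von Mises distribution: solve I1(xi)/I0(xi) = mean(cos(angle)).
    Angles are measured relative to the post-to-post direction, in radians."""
    r_bar = np.mean(np.cos(angles_rad))   # mean resultant length for a zero mean
    if r_bar <= 0:
        return 0.0                        # no preferential alignment
    # i1e/i0e are exponentially scaled Bessel functions, so the ratio is stable.
    return brentq(lambda k: i1e(k) / i0e(k) - r_bar, 1e-6, 1e4)

# Example with synthetic angles standing in for the image-derived data.
rng = np.random.default_rng(0)
angles = rng.vonmises(mu=0.0, kappa=4.0, size=500)
print(f"estimated xi = {fit_vonmises_concentration(angles):.2f}")
```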
Biomechanical Model
To model microtissue mechanics, we used a constrained mixture continuum mechanics framework [27,28,29]. Due to the thinness of the fibroTUG tissues (∼12 µm compared to the ∼400 µm in length), we consider the tissue domain Ω ⊂ R². The boundary at the post is denoted by Γ_p. The reference coordinates of a given point in Ω are denoted by X. Under internal and external loads, this point moves to a deformed position x = X + u, where u is the displacement field. The deformation gradient tensor F = ∇u + I describes the deformation of the material with respect to the reference coordinates and J = det F the relative volume change [28]. We further define C = FᵀF as the right Cauchy-Green deformation tensor. Constitutive relations are often expressed in terms of the invariants of C [30]. In this work, we considered the first invariant

I_1 = tr(C),

and its isochoric counterpart Ī_1 [31]. Furthermore, invariants describing the deformation in the fiber and myofibril directions are

I_4f = f_0 · (C f_0),    I_4m = m_0 · (C m_0),

and these are further modified to account for local dispersion κ_f, κ_m [25,32],

I*_4f = κ_f I_1 + (1 − 2 κ_f) I_4f,    I*_4m = κ_m I_1 + (1 − 2 κ_m) I_4m.

When the fiber dispersion κ_f = 0, I*_4f = I_4f, i.e., the local response of I*_4f is fully anisotropic, whereas when κ_f = 0.5, I*_4f = I_1/2, and the material behaves locally as a fully isotropic material. The same is true for the myofibril invariant I*_4m.
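The dispersion-modified invariants can be evaluated pointwise from the deformation gradient. The short helper below is a sketch of that calculation; the function name and example numbers are illustrative and not taken from the paper.

```python
import numpy as np

def modified_invariants(F, f0, kappa):
    """Compute I1, I4f and the dispersion-modified I4f* for a 2D deformation
    gradient F and a unit fiber direction f0 (illustrative helper)."""
    C = F.T @ F                        # right Cauchy-Green tensor
    I1 = np.trace(C)
    I4f = f0 @ (C @ f0)                # squared stretch in the fiber direction
    I4f_star = kappa * I1 + (1.0 - 2.0 * kappa) * I4f
    return I1, I4f, I4f_star

# kappa = 0 recovers I4f (fully anisotropic); kappa = 0.5 gives I1/2 (locally isotropic).
F = np.array([[1.10, 0.02], [0.00, 0.95]])
f0 = np.array([1.0, 0.0])
print(modified_invariants(F, f0, 0.0))
print(modified_invariants(F, f0, 0.5))
```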
As mentioned in Section 2.1, a fibroTUG tissue consists of two components, DVS fibers and iPSC-CMs.
We assumed the two components work in parallel and, therefore, the strain energy density of the tissue Ψ is given by the sum of the energy of the fibers Ψ_ECM and the iPSC-CMs Ψ_CM,

Ψ = Ψ_ECM + Ψ_CM.

The strain energy function Ψ_ECM is given by the strain energy density describing the fiber mechanics, Ψ_f, plus a term that delivers numerical stability in the areas where there are no fibers, Ψ_st, so that Ψ_ECM = Ψ_f + Ψ_st. The first term, Ψ_f, corresponds to a modified neofiber material law [33] and integrates the structural information obtained from the DVS fiber images, where C_1 is the stiffness in the fiber direction and C_2 is the isotropic stiffness. This formulation was chosen since the experimental data from the passive stretching of the fibers (Fig. 1B) showed a close-to-linear behavior that is well captured by this material law. The stabilization term Ψ_st combines two contributions: the first penalizes volumetric changes, while the second corresponds to the deviatoric term of a Saint-Venant Kirchhoff material. The parameters K = 10⁻³ kPa and µ = 10⁻² kPa are chosen to be ≪ C_1, so the mechanical response of the ECM is dominated by Ψ_f. A sensitivity analysis confirmed that these parameters are minimally important (see Section S6 of the Supplementary Information for more details).
The contractile iPSC-CMs were modeled by a passive component representing the bulk stiffness of the cells (Ψ_c) and an active contraction component (Ψ_a), so that Ψ_CM = Ψ_c + Ψ_a. The passive component is modeled using the 2D version of the compressible Neo-Hookean material law presented in Pascon et al. (2019) [34], whose strain energy density is parameterized by a shear modulus µ_c and a compressibility modulus K_c. We use µ_c = 2 kPa, which is close to values derived from stress-strain curves obtained from isolated iPSC-CMs [35]. Because of the 2D nature of the EHTs, the compressibility modulus was assumed negligible (i.e., K_c = 0), given how easily cells deform out-of-plane under in-plane compression.
The active component Ψ_a combines three ingredients: ϕ, a function that models the length-dependent behavior of cardiomyocytes, taken from [36] (see Supplementary Information Section S4 for more details); η, a parameter that controls the magnitude of the activation in time of the iPSC-CMs; and the local myofibril density ρ_m. Note that we assume the activation only occurs where there are myofibrils (hence the scaling of Ψ_a by ρ_m), but the passive response of the iPSC-CMs is present everywhere in the tissue. This is because iPSC-CM nuclei appear evenly distributed in the images of the tissues, but only some develop myofibrils.
The total Cauchy stress σ is computed from Ψ using

σ = J⁻¹ (∂Ψ/∂F) Fᵀ,

and, under a quasi-static regime, the stress balance is given by

∇ · σ = 0 in Ω.

Throughout the paper, we assess the mechanics of the tissues using the active stress and the strain in the myofibril direction, where the active stress is σ_a = J⁻¹ (∂Ψ_a/∂F) Fᵀ.
Data Assimilation
Once the structure of each tissue is defined, the only unknowns remaining in the system are the material parameters of the constitutive law of the fibers, C 1 and C 2 , and of the active component of the iPSC-CMs, η. For simplicity, we considered these parameters to be constant within a fibroTUG. To find these tissue-specific values, we assimilate the functional data shown in Fig. 1C-D using a parameter identification strategy introduced by [20]. Briefly, this technique integrates material parameters into the set of state variables, enabling the addition of constraints to match measured data. This method allows solving for both the displacement field and additional material parameters in the same forward simulation. In our case, we divide the assimilation into two steps. First, we use the data from the fiber indentation test (Fig. 1C) to find the stiffness parameters of the fibers, and then we use the post-displacement trace (Fig. 1D) to find the active parameter, η. The corresponding boundary value problem equations are described in Section S5 of the Supplementary Information.
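The data assimilation used here embeds the unknown parameters in the state of the forward problem [20]. As a rough, self-contained illustration of the same two-step idea, the sketch below replaces the finite-element solves with toy analytic forward models and recovers C_1 and η by ordinary least squares; the toy models, parameter values and function names are assumptions made for the example only.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-ins for the finite-element forward model; the real pipeline solves
# the boundary value problems described in Supplementary Section S5.
def toy_indentation_force(C1, depths, width=0.3):
    # Assume an approximately linear force-indentation response set by C1.
    return C1 * width * depths

def toy_post_displacement(eta, C1, k_p, times):
    # Assume contraction amplitude scales with eta and is resisted by posts and matrix.
    waveform = np.sin(np.pi * times / times[-1]) ** 2
    return eta / (k_p + 0.1 * C1) * waveform

def calibrate(depths, F_meas, times, u_meas, k_p=0.41):
    """Two-step least-squares calibration: (1) fiber stiffness C1 from the
    indentation test, (2) activation magnitude eta from the post displacement."""
    s1 = least_squares(lambda p: toy_indentation_force(p[0], depths) - F_meas,
                       x0=[1.0], bounds=(0.0, np.inf))
    C1 = s1.x[0]
    s2 = least_squares(lambda p: toy_post_displacement(p[0], C1, k_p, times) - u_meas,
                       x0=[1.0], bounds=(0.0, np.inf))
    return C1, s2.x[0]

# Synthetic "measurements" generated with known parameters, then recovered.
depths = np.linspace(0.0, 10.0, 20)
times = np.linspace(0.0, 1.0, 50)
F_meas = toy_indentation_force(3.84, depths)
u_meas = toy_post_displacement(2.5, 3.84, 0.41, times)
print(calibrate(depths, F_meas, times, u_meas))   # approximately (3.84, 2.5)
```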
Model validation
The objective of this first set of simulations was to assess the ability of the proposed pipeline and model to capture fibroTUG mechanics. To do so, three complete datasets of the soft fibers/soft post condition were analyzed. The fiber stiffness was set to the mean value found for the fibrous samples of this condition, C 1 = 3.84 kPa (see Section 3.2). By design, the post-displacement (and, therefore, the post-force) data were exactly matched by the simulations as shown in Fig. 4A. The data assimilation enabled us to identify the parameter η that reproduces the post-displacement and force curves. Fig. 4B shows the mean σ a,m at each time point for each tissue. To assess the ability of the model to capture local deformations, we validated the results of the simulations against the results from MicroBundleCompute, an image tracking software developed specifically for measuring the internal displacements of contracting EHTs [37]. For the three tissues, correlation plots were calculated between the predicted displacements of the simulation and the measured displacements from the video. Fig. 4C and D show the measured and predicted displacement fields in the post-to-post direction (Y) and the perpendicular direction (X) for one of the three datasets.
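The per-direction comparison against the tracked displacements reduces to a linear regression between the two fields. A minimal version of that computation is sketched below; the array names and stand-in data are illustrative.

```python
import numpy as np
from scipy.stats import linregress

def displacement_correlation(u_measured, u_predicted):
    """Slope and R^2 between image-tracked and simulated displacement fields,
    flattened over all tracked points and time frames."""
    fit = linregress(u_measured.ravel(), u_predicted.ravel())
    return fit.slope, fit.rvalue ** 2

# Example with random stand-in data for one direction.
rng = np.random.default_rng(1)
u_meas = rng.normal(size=500)
u_pred = 0.97 * u_meas + rng.normal(scale=0.1, size=500)
print(displacement_correlation(u_meas, u_pred))
```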
Active stress prediction for experimental conditions
In this section, we focus on understanding the effect of different mechanical environments on the ability of the iPSC-CMs to exert contractile stress. Images of five fibrous matrices for each condition were processed. For instance, in the aligned case, the magnitude of both F_p and σ_a,m is highest in the soft fibers/soft post condition and lowest in the case with stiff fibers. However, the relative differences between conditions in σ_a,m are lower compared to F_p. For example, the force output drops from 3.01 µN in the soft fibers/soft post case (grey bars in Fig. 5A) to only 1.19 µN in the case with stiffer fibers (red bars in Fig. 5A), which corresponds to a -60.5% relative change. In contrast, the maximum σ_a,m falls from 2.45 to 1.55 kPa between these two cases, only a -36.8% variation.
Isolating the effect of mechanical variations
To investigate the origin of the F p /σ a,m relative differences, we performed several simulations outside the parameter range of current experimental conditions. Specifically, we studied variations in post stiffness, fiber stiffness, fiber alignment, and myofibril density. To do so, we took a single post-force curve (Fig. 6A) and computed the active stress needed to generate that force output given a certain parameter set. In this experiment, higher values of σ a,m indicate that the tissue is less efficient in transmitting cell stress to force output as this was held constant across all conditions. We performed this test using five random matrices and five aligned matrices with five computationally generated myofibril fields each (Fig. 6B). The alignment of myofibrils was taken from the soft post/soft fibers case (gray curves in Fig. 3A). The stiffness in the fiber direction was set to be C 1 = 3.84 and C 1 = 19.47 kPa, for the soft fiber and the stiff fiber case, respectively.
First, we considered a constant and uniform myofibril density ρ_m = 1. Results of this experiment are shown in Fig. 6C for aligned matrices and Fig. 6E for random matrices. On aligned matrices, the active stress necessary to generate the force output represented in Fig. 6A was 2.12, 2.60, and 1.95 kPa for the soft fibers/soft post, stiff fibers, and stiff post conditions, respectively. Similar relative trends are observed for the random case, although with higher values compared to their aligned counterparts. Second, to study the effect of myofibril density, we considered the case where ρ_m = ρ_data, which is the specific density computed from the data (shown in Table 1). Fig. 6D (aligned matrices) and Fig. 6F (random matrices) show the results of these simulations. The magnitude of σ_a,m is elevated in the cases with lower ρ_m when compared with the ρ_m = 1 simulations, for both aligned and random matrices.
Discussion
In this work, we developed a pipeline to generate data-driven computational models of EHTs. By combining comprehensive experimental data with biomechanical computational models, we are able to model EHT mechanics and compute metrics to estimate iPSC-CM function. The pipeline developed allows us to model the explicit fiber structure and myofibril network and to infer the mechanical properties of these components from functional experimental data. This is facilitated by the use of the fibroTUG platform, which provides great control over the mechanical environment and enables obtaining detailed imaging and functional data [24]. The biomechanical computational model augments the analysis and conclusions obtained from the experiments. It also enables the test of conditions that are experimentally not feasible to achieve, allowing us to decipher the effect of the different variables of the system in the tissue mechanics.
Below, we discuss the results of the different simulations performed using the model.
Model Validation
For the three validation datasets, the correlation between the displacements predicted by the simulations and the measurements of the MicroBundleCompute software in the post-to-post direction is strong, with high R 2 values (mean 0.938) and slopes close to 1 (0.967, 1.047, 0.970 for the three tissues, see Fig. 4E). The correlations in the perpendicular direction are also positive but with slopes further from 1 (1.495, 0.694, 0.623 for the three tissues) and a mean R 2 = 0.372 (see Fig. 4F). The decrease in correlation in the X directions is explained by several reasons. First and foremost, the videos are taken using brightfield imaging [24], which mainly shows the fiber's deformations. Our modeling approach uses a constrained mixture framework, where the kinematics of fibers and cells are homogenized, meaning that the computed displacements represent the average displacement of both these components. Furthermore, the displacements in the X direction are smaller and harder to track as demonstrated by a simple exercise where two people track different features across the contraction cycle. This test showed a decrease in the measured displacement correlation, from R 2 = 0.9 in the Y direction to R 2 = 0.72 in the X direction (see Supplementary Information Section S7 for more details). For these reasons, we believe that imaging-to-modeling comparison is not direct, but it does allow us to confirm that our model is meaningfully capturing the main features of fibroTUG kinematics.
Active stress prediction for experimental conditions
In Fig. 5, the computational pipeline was used to assess the mechanics of fibroTUGs under different mechanical environments. Both F_p and σ_a,m show significant differences between the conditions (Fig. 5A-B). Since σ_a,m is a value that considers the structure of all the tissue components, the differences in this value indicate that modifying the mechanical environment where the cells develop will influence their contractile maturity. These differences are also captured by F_p, but the relative differences are influenced by a multitude of factors, including fiber mechanics, myofibrillar density and alignment, and contractile maturity. This shows that the force output reflects different variables in the system, not only the iPSC-CM active stress. This highlights the importance of giving proper context to force output, which is often treated as a direct surrogate of iPSC-CM stress [6,8,38].
Isolating the effect of mechanical variations
One key advantage of using computational models to study EHT mechanics is their flexibility to study scenarios that are difficult, if not impossible, to create in-vitro. These simulations can shed light on the influence of different variables of a system by allowing us to change their values individually and measure the variation in an output of interest. We performed controlled simulations where one mechanical variable was modified at a time to assess the changes in the F p /σ a,m relationship. In Fig. 6C and E, we can see that when ρ m = 1, changing the ECM stiffness and the boundary conditions will have a slight impact on the capacity of the tissue to translate cell active stress into force output. For example, for stiffer fibers, a higher amount of the work done by the iPSC-CMs is lost in tugging the less deformable matrix. Conversely, when the post is stiffer, the magnitude of σ a,m is slightly lower, corresponding to a more efficient transduction of myofibrillar stress to the total output force.
When ρ m = ρ data , the conditions with lower myofibril formation need to generate higher values of σ a,m . This is not surprising because there will be less myofibril area to generate the same force, which means that the existing sarcomeres need to generate more stress to compensate. However, our model allows us to measure that effect and compare it with the effect of other mechanical variations. The observed changes due to poor myofibril formation are much higher than those observed due to post or fiber stiffness (observed by the change of the bar plot magnitude between Fig. 6C-E and Fig. 6D-F). Furthermore, the cases that have lower ρ m (and that need higher σ a,m to generate the same force) correlate with the cases that have lower force output experimentally (Fig. 5A-B). This indicates that myofibril density is one of the most important mechanical parameters explaining EHT force output. The importance of myofibril formation has also been observed in single-cell models [39].
When evaluating the effect of fiber alignment, we can see that the magnitude of σ_a,m is higher in random than in aligned matrices. This makes sense, as in these cases both fibers and myofibrils are less aligned in the post-to-post direction (see, for example, the PDFs in Fig. 3A). Here, much of the work performed by the iPSC-CMs is lost pulling in the transverse direction. In Fig. 7, we decouple the alignment of fibers and myofibrils with the intention of understanding the relative importance of myofibril alignment versus fiber alignment. The results show that aligning the myofibrils on top of a random matrix (Rf-Am) or vice-versa (Af-Rm) only explains a small part of the difference between the all-aligned (Af-Am) and the all-random (Rf-Rm) conditions. For this reason, we conclude that, for the range of observed variations in myofibril alignment, the matrix structure has a higher effect on the force output. It could be that other mechanical environments generate a higher disarray of myofibrils, making this parameter more dominant.
The results of the model complement other biomarkers studied in this platform nicely [24]. For example, the aligned soft fibers/soft post condition showed a higher proportion of connexin 43, more mature forms of myosin (MLC-2v), and lower beats per minute, to mention a few. Our model shows that this condition and the aligned soft fibers/stiff post condition are the most efficient in transmitting force. However, the case with soft posts presents much higher myofibril strains (see Fig. 6C), which have been shown to be important in iPSC-CM maturation [40]. These observations show the benefits of using a combined computational-experimental approach to understand the different aspects involved in EHT function and how they relate to iPSC-CM maturation.
Limitations
The computational generation of myofibril fields enabled the flexibility to explore non-experimental scenarios to understand the importance of myofibril alignment. However, generating myofibril networks that are representative of real sarcomere organization is challenging. We performed several tests to assess our approach using the validation datasets. We tested the importance of including myofibril dispersion κ_m and studied the prediction error induced when using computationally generated myofibril fields instead of image-based ones. Detailed results of these tests are shown in Supplementary Information Section S3.
Briefly, we showed that not including κ m will mainly impact the deformations in X, with these simulations showing less necking than expected. However, the active stress prediction remains very similar. The test of the artificial myofibril fields showed that the sarcomere strain values are the most affected quantity, but the active stress prediction is, again, very similar to the one computed using image-generated myofibril fields.
The results of these tests show that we can use our proposed method to correctly assess changes in σ a,m .
A few assumptions were made in the model not based on the experimental data. The passive stiffness of the iPSC-CMs, isolated from the fibers, was not measured in our experiments. This parameter is tricky to measure, as it is known that the cell will change its stiffness during development [41] and that this value also depends on the temperature and the state of its contractile apparatus [42]. Therefore, for simplicity, we considered the passive stiffness of the cells to be 2 kPa, a value similar to the one measured by Ballan et al. [35] in iPSC-CMs, and lower than what is usually measured in mature CMs [43]. We further assume that the passive response of the cell was isotropic. This is because no studies were found on the anisotropy of iPSC-CMs and since the iPSC-CMs considered in this study are still immature (thus, the cytoskeleton is less organized), we believe that our assumptions of an isotropic, less stiff passive response than in adult CMs are reasonable.
Another modeling assumption is that all iPSC-CMs have a synchronous activation and that all their contraction can be modeled using a single η parameter. This means that the spatial heterogeneity observed in the σ a,m fields in Fig. 5C,F are the product of only the length-dependent function ϕ in Eq. (6). The simultaneous activation assumption is backed up by these tissues being very small, and the calcium transients measured experimentally did not show spatial differences [24]. However, it is possible (and probably expected) that not all the iPSC-CMs in one tissue have the same maturity, which could be modeled by having an η field instead of a single scalar value. For example, instead of only using the post-displacement/force as a constraint, we could use the displacement measurements from the MicroBundleCompute software to find a local η. Different strategies to do so will be explored in the future. Nevertheless, we believe that using a single η parameter allows us to compute an average activation and that the overall regional differences will not affect the conclusions of this paper.
Finally, we modeled the fibroTUG in 2D, which also forces us to consider a material with low compressibility. This decision is based on the low thickness to length ratio that is ∼ 3%, and because only 2D projected images were available for the fibers, myofibrils, and displacement tracking. Future studies will aim to generate a pipeline to reconstruct the full tissue geometry from 3D stacks obtained from high-magnification imaging techniques and assess the validity of the 2D model.
Conclusions
This work introduces a data-driven biomechanical computational model of EHTs. The implemented pipeline leverages the experimental data of the fibroTUG platform, which was introduced specifically to study how mechanics influence iPSC-CM function. With the model, we are able to estimate the iPSC-CM active stress that replicates the experimental observations. Unlike the tissue force output, this value is a direct measure of iPSC-CM contractile function. We used the model to assess the effects of different mechanical environments on iPSC-CM stress generation. The results followed the same trends as the force output, indicating maturation differences, but with lower relative differences. This highlights that EHT force generation is the product of different variables besides iPSC-CM contraction. We explore this fact with our model and show that myofibril density is one of the key factors explaining the experimental differences observed.
The developed framework opens the door to many other applications in the cardiac tissue engineering field, and it shows how a combination of in-silico with in-vitro approaches can help us better understand the mechanobiology of EHTs.

Figure 6 caption (continued): An aligned and a random matrix were built from the images, and computationally generated myofibril fields were created using the corresponding PDF. Results for an aligned condition (C) with ρ_m = 1, and (D) with ρ_m equal to the specific density obtained from data for each case (ρ_data). (E) and (F) show the results for random matrices with ρ_m = 1 and ρ_m = ρ_data. The values below the bar plots in (D) and (F) correspond to the mean density used in these cases (see Table 1). All data presented as mean ± standard deviation. * p < 0.0001 by unpaired t-tests.

Figure 7: Procedure for the myofibril vs fiber alignment test. Simulated conditions that have aligned fibers with random myofibrils (Af-Rm) and vice-versa (Rf-Am): in-silico tissues are generated using fibrous matrices generated from images and the PDFs describing myofibril alignment. These simulations are compared with the aligned (Af-Am) and random (Rf-Rm) experimental conditions. (B) Mean σ_a,m at maximum contraction for the Af-Am, Af-Rm, Rf-Am, Rf-Rm simulations using soft fibers/soft posts. All data presented as mean ± standard deviation. * p < 0.0001 by unpaired t-tests.
"year": 2023,
"sha1": "b20b18c9627c59f2fc6052c2e1f98eed1d822769",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b20b18c9627c59f2fc6052c2e1f98eed1d822769",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Developing an understanding of sophorolipid synthesis through application of a central composite design model
Summary

A key barrier to market penetration for sophorolipid biosurfactants is the ability to improve productivity and utilize alternative feedstocks to reduce the cost of production. To do this, a suitable screening tool is required that is able to model the interactions between media components and alter conditions to maximize productivity. In the following work, a central composite design is applied to analyse the effects of altering glucose, rapeseed oil, corn steep liquor and ammonium sulphate concentrations on sophorolipid production with Starmerella bombicola ATCC 22214 after 168 h. Sophorolipid production was analysed using standard least squares regression and the findings related to the growth (OD600) and broth conditions (glucose, glycerol and oil concentration). An optimum media composition was found that was capable of producing 39.5 g l-1 sophorolipid. Nitrogen and rapeseed oil sources were found to be significant, linked to their roles in growth and substrate supply respectively. Glucose did not demonstrate a significant effect on production despite its importance to biosynthesis and its depletion in the broth within 96 h, instead being replaced by glycerol (via triglyceride breakdown) as the hydrophilic carbon source at the point of glucose depletion. A large dataset was obtained, and a regression model with applications towards substrate screening and process optimisation was developed.
Introduction
Chemical surfactants are essential to our modern life, appearing in some form in nearly every market sector, with applications in medicine, food, industrial processing, cosmetics, personal care and home care. Derived from petroleum, these surfactants are being increasingly scrutinized due to their persistence and detrimental effect on the environment and wildlife (Langberg et al., 2019; Kaczerewska et al., 2020). Biosurfactants are an eco-friendly, non-toxic and biodegradable alternative produced by microorganisms that have the potential to disrupt the surfactant market, which is forecast to grow to 52.4 billion USD by 2025 (MarketsandMarkets, 2020). Sophorolipid biosurfactants present the greatest potential for market entry due to the high levels of productivity that can be gained from the wild-type strain S. bombicola ATCC 22214, reaching levels > 300 g l-1 (Rau et al., 2001). A glycolipid biosurfactant, sophorolipids are composed of disaccharide sophoroses (2-O-β-D-glucopyranosyl-D-glucopyranose) linked to a hydrophobic terminal/sub-terminal hydroxylated fatty acid group via a glycosidic bond (Nuñez et al., 2001).
Despite this, market entry is hindered by the high costs of production attributed to the cost of food-grade feedstocks (Ashby et al., 2013). Furthermore, food-crop-derived feeds directly compete with the agricultural industry, causing greater economic instability in the areas where crops are grown (Hertel et al., 2012; Winchester and Reilly, 2015; Gerbens-Leenes, 2018). Primarily, sophorolipid production is reliant on a nitrogen source (for cell growth), a hydrophilic carbon (for production of the sophorose monomers) and a hydrophobic carbon (for production of the fatty acid group). As such, there is a need to select alternative feedstocks rich in the required nutrients that are low cost and do not compete with food crops (for land use/direct consumption). The high number of potential feedstocks quickly becomes prohibitive to screen in bioreactors, so most are typically screened at shake flask scale. Whilst overall production is reduced due to limitations in mass transfer and reduced control of fermentation conditions, shake flask screening provides an excellent directional tool for identifying potential feedstocks. This approach has been applied to the screening of numerous potential feedstocks including waste frying oils, non-feed crop oils and hydrolysed biomass (Fleurackers, 2006; Wadekar et al., 2012; Kaur et al., 2019; Marcelino et al., 2019).
However, when attempting to screen feedstocks there is a poor understanding within the literature over the maximum potential productivity that can be gained at a given scale/model that can be used as an internal control for which to compare against. Whilst some papers apply controls with food-grade feedstocks to compare against, they typically do not demonstrate that their control is at an optimum for that scale. By demonstrating the maximum productivity of a given scale through a 'best-case' control, the efficacy of a potential feedstock can easily be assessed. In some cases, no controls are applied, instead opting to refer to other works where different scales/media compositions and controls have been applied, which reduces the ability to identify whether any beneficial effect on production is caused by the feed or the difference in scale/ process. Similarly, alternative feedstocks are often replaced 'like-for-like' with their food-grade counterpart in media that may have been optimized for production despite differences in composition (nutrients, fatty acids, etc.), quality and the presence of potential inhibitors. An understanding of the media components in a fermentation media, their purpose and effect on productivity and potential interactions is essential to effectively screen a potential feedstock. By understanding this, the media composition can be altered to bolster the production potential of a feedstock and avoid it being tested in suboptimal conditions that may mask its use.
The application of statistical design of experiments provides an understanding of media component effects by creating structured experimental conditions where media components (factors) can be varied by concentration (levels) to generate a design region/space where the changes in productivity (output) can be analysed. Response surface methodologies (RSM) dictate a specific design that models the relationship between factors and the output once a statistical model is applied to estimate linear, quadratic and cubic curvature and two-factor interactions. As such, the aim of this work was to apply an RSM to test the composition of a simple fermentation media for sophorolipid production, targeting the hydrophobic and hydrophilic carbon and nitrogen sources. A variety of RSMs can be applied for this purpose; however, a central composite circumscribed design (CCCD) was chosen due to its low prediction variance and efficient estimation of linear and quadratic interactions, which allow more accurate profiling of each factor (Witek-Krowiak et al., 2014;Jankovic et al., 2021). In order to relate the findings of the statistical model to the underlying biological function of S. bombicola ATCC 22214, broth conditions were monitored and the changes observed (substrate consumption, cell mass and productivity) related to the profiles found in the regression model.
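For readers unfamiliar with the layout of a CCCD, the coded design matrix for three factors (factorial ±1 points, axial points at ±α = 1.682 and replicated centre points) can be generated and mapped to physical concentrations as sketched below. The centre values and half-ranges used in the example are illustrative choices that correspond approximately to the Design 1 levels reported later in the Results; the helper names are not from the study.

```python
import numpy as np
from itertools import product

def ccc_design(n_center=6, alpha=1.682):
    """Coded design matrix for a 3-factor central composite circumscribed design."""
    factorial = np.array(list(product([-1, 1], repeat=3)), dtype=float)  # 8 corner points
    axial = np.vstack([alpha * np.eye(3), -alpha * np.eye(3)])           # 6 axial points
    center = np.zeros((n_center, 3))                                     # replicated centre points
    return np.vstack([factorial, axial, center])

def to_concentrations(coded, centers, half_ranges):
    """Map coded levels (-alpha..+alpha) to physical concentrations."""
    return centers + coded * half_ranges

design = ccc_design()
# Illustrative centre points and factorial half-ranges for glucose (g/l), oil (ml/l), CSL (g/l).
concs = to_concentrations(design,
                          centers=np.array([100.0, 100.0, 5.0]),
                          half_ranges=np.array([50.0, 50.0, 2.5]))
print(concs.round(2))
```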
In order to select components and gain an understanding of suitable conditions for high SL production, a literature review was performed. Table 1 highlights the results of the literature search, identifying works that used high-quality/food-grade feedstocks in a batch/fed-batch model with no influence from external production modifications (i.e. gravity separation). The high levels of production gained by Rau et al. (2001) and Davila et al. (1992) led to the selection of glucose, rapeseed oil, corn steep liquor (CSL) and ammonium sulphate for application in the model. From this, the CCCD design was applied and a model subsequently developed to determine the key components for SL production, with analysis of the fermentation broth to relate changes in composition (glucose, glycerol and rapeseed oil) to the production achieved after 168 h incubation.
Results
Using a central composite circumscribed design model, a series of 50 ml fermentation flasks were incubated over a 168 h period to determine the effect of glucose, rapeseed oil and CSL/ammonium sulphate on SL production. The following section describes the initial findings and the iterative process that the model was taken through, exploring the initial central composite model, augmentation and subsequent improved regression with a wider dataset. In addition, at-line samples were taken and quantified to understand the growth and the metabolic consumption profile of S. bombicola. The full data set with actual and predicted values can be found in Table S1.
Design 1 -Overview of production

As shown in Fig. 1, the applied levels of glucose, oil and nitrogen in the factorial and axial ranges of Design 1 were capable of producing a range of sophorolipid quantities at harvest from 11.3 to 39.5 g l-1 SL. The centrepoint ('000') combination, based on literature concentrations of each media component, was capable of producing a mean of 19.3 g l-1 sophorolipid. Amongst the three media components, alteration of the nitrogen had the greatest effect on production. By changing the nitrogen concentration from 5 to 0.8 g l-1 CSL and retaining the centrepoint values for glucose and oil, the fermentation was able to reach 39.5 g l-1 SL ('00a'), producing a significantly different value from all other tested combinations in the design (highlighted by the comparison circles). Alteration of the quantity of rapeseed oil in the media also had a discernible effect on SL production. As oil was reduced below 50 ml l-1 ('-' of the pattern), production began to decline, with axial values (15.9 ml l-1 oil, '0a0') only capable of producing 11.3 g l-1 SL. Inversely, by altering levels from 50 to 100 ml l-1 rapeseed oil ('---' to '-+-'), it was possible to show a statistically significant increase in production from 17.1 to 23.7 g l-1 SL (P = < 0.05).
Glucose did not show statistically significant differences to SL production in the ranges tested in Design 1 (15.9-184 g l -1 ). Whilst there appears to be a boost in production between the combination '-+-' and '++-' from 23.7 to 27.2 g l -1 , this is not statistically significant (P = < 0.05).
Design 1 -Regression analysis
Following completion of the fermentation flasks of Design 1, the final SL concentrations were inputted into JMP and linear, quadratic and linear interaction terms selected through stepwise regression for the term combination with the lowest BIC value. For details on the regression equation and model fit, see the Supplementary Information; the selected significant model terms are shown in the Pareto plot in Fig. 2A. The standard least squares regression demonstrated an R2 adjusted value of 0.77 and root-mean-square error of 3.29 g l-1 SL. The high production value of '00a' (40 g l-1) was marked as an outlier in studentized residual (exceeding 95% individual t-limits) and actual/predicted plots (see Figs S2-S3). As shown in the plot (Fig. 2A), nitrogen and oil both had a significant effect on the production of sophorolipids, demonstrating quadratic curvature and an interaction between the two terms.

Table 1. Fermentation media compositions used in literature studies reporting high sophorolipid production, including Davila et al. (1992) and Casas and García-Ochoa (1999); entries are listed per study for each medium component.
Hydrophilic carbon source: 100 g l-1 glucose | 100 g l-1 glucose | 100 g l-1 glucose | 100 g l-1 glucose | 100 g l-1 glucose
Hydrophobic carbon source: 50 g l-1 rapeseed oil | 100 g l-1 (50% C18:1) rapeseed oil, refined | 40 g l-1 oleic acid | 100 g l-1 rapeseed oil ethyl ester | 100 g l-1 sunflower oil
Nitrogen source: 6 g l-1 yeast extract, 5 g l-1 peptone | 4 g l-1 (NH4)2SO4, 5 g l-1 CSL | 10 g l-1 yeast extract, 10 g l-1 urea | 4 g l-1 (NH4)2SO4, 5 g l-1 CSL | 1 g l-1 yeast extract
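The stepwise selection and least-squares fit described above can be reproduced outside JMP. The sketch below fits a quadratic response surface with an oil-nitrogen interaction to a table of flask results using statsmodels, reporting BIC (to compare candidate term sets) and adjusted R²; the column names and the synthetic data frame are assumptions made purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic flask results standing in for measured SL titres (g/l); in practice
# each row would be one flask from the design with its harvest titre.
rng = np.random.default_rng(2)
n = 40
df = pd.DataFrame({
    "glucose":  rng.uniform(16, 184, n),   # g/l
    "oil":      rng.uniform(16, 184, n),   # ml/l
    "nitrogen": rng.uniform(0.8, 9.2, n),  # g/l CSL
})
df["SL"] = (20 + 0.05 * df["oil"] - 1.5 * df["nitrogen"]
            - 0.0003 * df["oil"] ** 2 + rng.normal(scale=2.0, size=n))

# Quadratic response surface with the oil x nitrogen interaction retained;
# BIC can be compared across candidate formulas to mimic stepwise selection.
formula = ("SL ~ glucose + oil + nitrogen + I(glucose**2) + I(oil**2) "
           "+ I(nitrogen**2) + oil:nitrogen")
fit = smf.ols(formula, data=df).fit()
print(fit.bic, fit.rsquared_adj)
print(fit.params)
```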
The results of Design 1 indicate that the glucose range tested did not demonstrate a significant effect on the final sophorolipid concentration. From the literature, it has been shown that there should be a point where production begins to decline as the glucose concentration is reduced. As such, it was assumed that the concentration range initially tested was not capable of producing this effect, requiring further reduction to find the minimum substrate requirements.
A boost in SL production was observed when the nitrogen concentration was reduced to the axial value (0.8 g l -1 CSL, 0.64 g l -1 ammonium sulphate). This value was the lowest concentration of nitrogen tested. As such, it would be impossible to conclude that the nitrogen concentration found in Design 1 was truly optimum for SL production. Furthermore, this concentration of nitrogen was only tested against a single concentration of glucose and oil (pattern '00a'), with no indication of how variation of these two components would alter SL production. As such, it was decided that a second central composite design (Design 2) would be performed, reducing the glucose and nitrogen concentrations further to augment the current data set and apply further regression with higher-order terms (cubic curvature and quadratic-linear interactions).
Design 2 -Overview of production
Following augmentation of the central composite design, a series of flasks were run to explore a range of reduced glucose (3.07-61.93 g l -1 ) and nitrogen concentrations (0.1273-1.4727 g l -1 CSL, 0.102-1.178 g l -1 ammonium sulphate) in the hope of further maximizing SL production and determining the concentrations at which SL production would begin to decline for each media component. The results of the Design 2 flasks are shown in Fig. 3. Overall, Design 2 flasks show a higher level of SL production than Design 1, with the majority exceeding 27 g l -1 SL, up to a maximum of 39.42 g l -1 ('A00'). Of the combinations tested, two demonstrate statistically comparable values (as shown by comparison circles, P = < 0.05) to the optimum flask combination found in Design 1 ('00a'), with '++0' and 'A00' with an average of 38.23 and 39.42 g l -1 SL respectively. Similar to Design 1, the alteration of nitrogen had the greatest impact on production, with factorial flasks with higher concentrations of nitrogen (patterns '--+', '+-+', '-++' and '+++') demonstrating statistically lower SL production (Student's t-test, P = < 0.05) compared to their counterparts with reduced levels (e.g. '--+' to '---' and '+-+' to '+--'). Flasks with rapeseed oil at 50 ml l -1 ('-' of the pattern) did see some reduction in the production of SLs; however, in general the levels of SL production in these flasks were higher than those in Design 1, where 50 ml l -1 oil was used, ranging from 24.33 g/L ('---') to 35.56 g/L ('+--') SL. At axial levels of oil (15.9 ml l -1 ), only 13.1 g l -1 SL was produced. Unlike Design 1, the range of glucose tested in Design 2 (3.07-61.93 g l -1 ) was such that it was possible to observe a statistically significant decline (Student's t-test, P = < 0.05) in SL production when glucose levels were reduced (excluding in patterns '+++' to '-++'). However, even with 3.07 g l -1 glucose ('a00') flasks were still capable of producing an average of 32.43 g l -1 SL.
Combined design regression
The combination of data from Design 1 and 2 provides 69 SL production values from 34 unique combinations of glucose, rapeseed oil and CSL/ammonium sulphate, to which a regression model can be applied. With this larger data set, higher-order terms were included in the regression (compared with the data set from Design 1 only) to determine the significance of cubic curvature and quadratic-linear interactions and to improve the predictive model. For details on the regression equation and model fit, see Figs S4-S6 and Table S2. The selected significant model terms, following stepwise regression, are shown in the Pareto plot in Fig. 2B. As shown in Table 2, the combined regression model produced showed an improved adjusted R 2 and RMSE from the regression of Design 1, with the previous outlier (Design 1, '00a') sitting along the predicted/actual regression line and within the confidence limits of the studentized residuals (see Fig. S6). Two patterns, Design 1 '+--' and Design 2 '00a', were possible outliers in the regression as they exceeded 95% individual t-limits. In addition, replicate variance was different between the two (Design 1 '+--', σ = 4.33 g l -1 , Design 2 '00a', σ = 0.6 g l -1 SL); however, as neither exceed Bonferroni residual limits (95% simultaneous) they were retained in the regression. The augmented regression model retained identical terms to Design 1, namely in the significance, curvature and interaction between the oil and nitrogen terms. The increased ranges tested in the model for nitrogen and glucose did lead to some changes in the significance. The linear term of nitrogen has an increased LogWorth value from 7.123 (Design 1) to 11.306 (Design 1 and 2 combined), whilst glucose was found to present quadratic curvature over the tested range. Whilst the linear term for oil is of low significance in Design 2, this is because its significance is contained within the cubic curvature (Oil*Oil*Oil) term. However, glucose still did not demonstrate statistical significance towards the production of sophorolipids, being retained in the model due to the significance of the quadratic term. The addition of new terms in the regression identified cubic curvature for rapeseed oil, with the addition of a quadratic-linear interaction between the oil and nitrogen (Oil*Oil*Nitrogen).
Surface plots were generated to demonstrate the interaction between any two of the media components, with their effect on the final sophorolipid concentration at 168 h shown in a 3D space. Figure 4 shows the effects of altering the concentration of two given media components on SL production. As shown, alteration of the components changes the shape and contour of the surface (with respect to the final sophorolipid concentration), allowing for an 'optimum point' to be found, as marked by the intersecting lines on the surface plots. Looking at the individual effect of each component, the plots demonstrate that SL production increases as nitrogen is lowered, down to the limits tested in the model (0.13 g l-1 CSL, 0.104 g l-1 ammonium sulphate). Conversely, a sharp rise in SL production is seen as oil concentration increases, up to an optimum point close to the centre of the tested range (112 ml l-1). Glucose demonstrates a similar upward trend towards an optimal point (108 g l-1); however, moving from the least (3.06 g l-1) to the most optimum (108 g l-1) concentration has a smaller effect on production (33.38 to 40 g l-1 SL) compared with oil.
As shown in the model output/Pareto plot, glucose did not interact with the other terms of the model. This is similarly reflected in Fig. 4A and B, where the contour of the surface (shown on the surface mesh) along the glucose axis retains its profile (a quadratic curvature), regardless of the nitrogen and oil concentration. Similarly, the same relationship is shown when nitrogen and oil are compared against changing glucose concentrations, retaining their curvature (quadratic and cubic respectively). It should be noted that whilst the cubic profile of oil is not pronounced in Fig. 4B, this is due to the nitrogen concentration being set to the predicted optimum (0.127 g l -1 ), which reduces the amount of curvature. This curvature is more pronounced at higher levels of nitrogen (8-10 g l -1 ) in Fig. 4C on the surface plots, with the surface profile of the oil surface changing as the nitrogen concentration is altered. Similarly, the nitrogen demonstrates a sharp quadratic curvature upwards as concentrations reduce when oil concentrations are > 100 g l -1 ; however, this curvature becomes much shallower as the oil concentration is reduced.
The predictive model generated from the regression model was able to determine the optimum concentrations of glucose, oil and nitrogen that were capable of producing a theoretical sophorolipid concentration of 40 g l -1 (Fig. 5). This model provides a 2D interpretation of the surface profile of each individual component as their concentration is adjusted (with the other 2 components maintained at the optimum concentration), matching the highlighted surfaces on Fig. 4 (shown by the white dashes).
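Once a regression model of this kind is available, the predicted optimum can be located numerically by maximizing the fitted surface over the tested ranges. The sketch below uses an illustrative fitted quadratic (the coefficients are placeholders, not the study's fitted values) together with bound-constrained optimization.

```python
import numpy as np
from scipy.optimize import minimize

def predicted_sl(x):
    """Illustrative fitted response surface: x = [glucose (g/l), oil (ml/l), CSL (g/l)].
    Coefficients are placeholders, not the coefficients fitted in this study."""
    g, o, n = x
    return (15.0 + 0.10 * g - 0.0005 * g**2 + 0.30 * o - 0.0013 * o**2
            - 2.0 * n + 0.08 * n**2 - 0.004 * o * n)

# Maximize the prediction inside the experimentally tested ranges.
bounds = [(3.1, 184.1), (15.9, 184.1), (0.13, 9.2)]
res = minimize(lambda x: -predicted_sl(x), x0=[100.0, 100.0, 2.0], bounds=bounds)
print("optimum (glucose, oil, CSL):", res.x.round(2))
print("predicted SL (g/l):", round(predicted_sl(res.x), 1))
```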
Interaction between media components and sophorolipid production
Alongside final sophorolipid concentration, residual rapeseed oil at harvest was quantified and used to determine the amount of oil consumed. Figure 6 demonstrates the relationship between the rapeseed oil consumption of flasks in Design 1 and their final sophorolipid concentration. As shown, flasks with low levels of oil (< 50 ml l -1 ) show no residual oil at harvest (100% consumed) regardless of nitrogen concentration (Group 1), with all of them demonstrating low sophorolipid production (< 20 g l -1 SL). As the oil concentration is increased (> 100 ml l -1 ), none of the flasks fully consume the available oil (excess oil) and the final sophorolipid concentration increases as the nitrogen level is reduced from 5 g l -1 CSL (Group 2, bottom square), to 2.5 g l -1 CSL (Group 2, top square) until it finally reaches 0.8 g l -1 CSL where peak sophorolipid production is seen.
Influence of media components on growth and media composition
In order to link the statistical trends seen in the regression model to the actual conditions within the fermentation broth, growth (OD600), glucose and glycerol were quantified in selected flasks over the 168 h period. Initial quantification of optical density was hindered by overestimation and high variance caused by components of the fermentation broth. The addition of an ethanol cleaning step led to a quantification method with low variability that was applied to the axial block (all flasks tested containing axial 'A' or 'a' values) of Design 2. As such, this block was chosen to analyse growth, with the additional quantification. The effect of glucose, nitrogen and oil on the optical density, glucose and glycerol profiles of the sophorolipid flasks is shown in Fig. 7, with quantification at 0, 24, 96 and 168 h. In general, all flasks demonstrated the same consumption/production profile for glucose and glycerol. Initially, small amounts of glucose were consumed between 0 and 24 h, at which point the consumption rate increased until glucose was fully depleted by 96 h. During this period, the glycerol concentration increased until glucose became fully depleted, at which point the glycerol began to be consumed as the concentration declined for the remainder of the fermentation. In all flasks, glycerol never reached 0 g l-1 during the 168 h fermentation period.
Initial glucose concentration (Fig. 7A) does not affect the growth of S. bombicola in the fermentation broth, with the exclusion of one group (3.07 g l-1 glucose) that, whilst lower initially, is able to match the growth levels of the other flasks by 96 h. Logically, increased initial concentrations of glucose lead to greater levels in the broth at 24 h. By 96 h, glucose is fully consumed regardless of initial concentration. Similarly, the higher initial levels … As shown in Fig. 7B, varying the starting oil concentration does not initially cause a change in growth (24 h); however, growth begins to diverge from 96 h, with higher levels of oil leading to higher OD600 values. Glucose levels are not influenced by changes in oil concentrations. However, higher levels of rapeseed oil lead to greater levels of residual glycerol in the fermentation broth up until the point of glucose depletion by 96 h. Following this, glycerol consumption occurs at the same rate regardless of initial oil concentration (as shown by the gradient between 96 and 168 h: −0.012, −0.019 and −0.012 for 15.9, 100 and 184.1 ml l-1 respectively). Whilst initial growth (24 h) does not show statistically different OD600 values (Student's t-test, P = < 0.05), increases in initial nitrogen (Fig. 7C) lead to increased growth by 96 h, which is sustained to the point of harvest (168 h). Whilst initially similar (24 h), glycerol generation separates by 96 h, with higher residual concentration in flasks with lower initial nitrogen. Glycerol consumption is similar between 0.8 and 1.4 g l-1 between 96 and 168 h (gradient = −0.033 and −0.04, respectively), but far lower at 0.127 g l-1 nitrogen/CSL (gradient = −0.05). Residual glucose levels are initially similar (0-24 h) between flasks of different initial nitrogen, with 0.8 and 1.4 g l-1 CSL/nitrogen both reaching depletion by 96 h. However, flasks with 0.127 g l-1 nitrogen still demonstrate residual glucose at 96 h.

Fig. 4. 3D surface plots comparing model terms (A) glucose and nitrogen, (B) glucose and oil and (C) oil and nitrogen using the predictive formula produced following model regression. Surface plots are coloured from low (red) to high (green) final sophorolipid concentration at 168 h, with a mesh applied on the surface to show 3D topography. Predictive modelling was used to set each term to its predicted optimum value, as shown by the intersecting grid lines (black). The surface curvature at these optimum points (white dashes) is representative of the profile for each media component in the predictive model.
Discussion
The aim of this work was to develop a small-scale model of SL production that would better the understanding of the SL production process in relation to the media components and changes within the fermentation broth. Through the application of a CCCD response surface methodology, an optimized media composition could be found and a predictive model developed, with future applications in feedstock screening, tracking individual component influence and how they interact together in order to hinder/boost production.
Design rationale
A major design decision was to simplify the composition of the fermentation media to the three core constituents for sophorolipid biosynthesis, guided by the current understanding in order to easily link changes in production to a specific component. Nitrogen is essential in stimulating the growth phase of the fermentation, allowing for the accretion of SL producing cells, but there is a trade-off between gaining a higher cell density and the increased length of the growth phase. In order to generate the sugar moiety of the sophorolipid structure, UDP-glucose must be generated via glycolysis/glycogenesis, driven by a hydrophilic carbon source. Whilst more complex sources can be used, glucose provides the most direct/simple precursor to the glycolysis pathway, leading to a more productive SL fermentation (Van Bogaert et al., 2008;Rispoli et al., 2010;Bhangale et al., 2014;Konishi et al., 2016). The generation of the SL tail via terminal or sub-terminal hydroxylation of C16-C18 length fatty acids can be supplied directly (free fatty acids) or through lipid precursors (triglycerides) and has been noted to be the most vital part in gaining a productive process (Davila et al., 1994;Cavalero and Cooper, 2003;Felse et al., 2007). The high oleic acid content of rapeseed oil (61.6% in this work) and its relatively low cost (particularly in comparison with pure fatty acids) made it an obvious choice (Daverey and Pakshirajan, 2009;Van Bogaert et al., 2010). The supply of oil must be well controlled as it can easily cause changes in the broth rheology, affecting oxygen mass transfer and ultimately leading to a decline in production, particularly at shake flask scale, where agitation is reliant solely on shaking of the broth. By simplifying to three media components with distinct purposes in SL synthesis, the effects they have on SL production can easily be identified avoiding confounding from other/more complex media components.
Applying central composite designs to maximize production of SL

Through the application of an iterative central composite circumscribed design of experiments, a fermentation media composition was found that was capable of achieving 40 g l-1 SL in 168 h at 50 ml scale using only four media components. This gives confidence that the maximum potential SL production has been found at this shake flask scale (50 ml working volume) with these media components, providing a 'best-case' internal control to compare against when substituting with a feedstock during screening. This is a good level of production from S. bombicola ATCC 22214 when comparing against other works of similar scale (shake flask), quantification method (ethyl acetate/gravimetric) and feedstock type (food grade/high purity). At 50 ml Erlenmeyer scale, Daverey and Pakshirajan (2009) were able to achieve 45 g l-1 in 192 h with a soybean oil and sugarcane molasses medium, whilst Shah et al. (2017) reached 32 g l-1 in 120 h with palm oil, glucose, yeast extract and urea. With the inclusion of pure oleic acid (the preferred carbon-length fatty acid for SL synthesis), authors see a major increase in productivity, from 55.2 g l-1 (192 h, 50 ml scale) to 95.4 g l-1 (144 h, 10 ml scale) (Kurtzman et al., 2010; Jadhav et al., 2019). Productivity is improved as the fatty acids are supplied directly to the cells, rather than through indirect fatty acid precursors such as triglycerides, alkanes and fatty acid ethyl/methyl esters. The performance of the fermentation in this work benefits from the procedural media optimisation and the inclusion of baffled flasks to improve mixing and mass transfer. The oxygenation of the fermentation broth was a major point of consideration for this SL fermentation, as the biphasic (water/oil) media and viscous properties of the product are known to cause a decline in mass transfer and impede productivity (Saerens et al., 2015; Dolman et al., 2017).
Applications of the data set and predictive capabilities
One of the key challenges to market entry for sophorolipids is the high cost of feedstocks, meaning alternative, low-cost sources derived from non-food crops and agricultural by-products are required (Ashby et al., 2013). The sheer quantity of potential feedstocks that can be screened, at a range of concentrations and combinations, is generally prohibitive to test at bioreactor scale. As such, screening must be performed at shake flask scale, opting to forgo the beneficial process control (pH, temperature, aeration and stirring) that increases sophorolipid productivity. With this in mind, it is important to understand the limitations of the shake flask scale model that is being applied, developing an internal control that demonstrates the maximum productivity that can be achieved at that scale. Without an understanding of the minimum and maximum range of productivity of a model, the potential of a feedstock may be masked. Unfortunately, many works within the literature do not apply suitable controls or instead opt to compare against other authors that have different scales/models. The work here distinguishes itself from the literature by having a proven, robust 'best-case' model that can be used to compare feedstocks and provide a between-run control.
The large amount of data produced has also provided a robust predictive model that provides an understanding of the effects of reducing/increasing the three primary media components. Ultimately, this model can be applied to screening; by understanding the composition of a given feedstock in relation to its glucose, nitrogen and fatty acid content, it will be possible to alter the composition of the fermentation media, guided by the predictive model, to maximize production. The model developed is capable of showing statistically significant changes in SL production as components are altered, with values from 11.06 to 40 g l -1 SL, which gives confidence that any detrimental/ beneficial changes to the fermentation process will likely cause a visible change in production.
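As an illustration of how such a prediction expression can be used during screening, the sketch below evaluates a generic second-order response surface at a candidate media composition. It is a minimal, hypothetical example: the coefficient values are placeholders and not the fitted terms from this work (the actual prediction expression is reported in Fig. S4).

```python
# Minimal sketch: evaluating a second-order (quadratic + interaction) response
# surface at a candidate media composition. Coefficients are placeholders, not
# the values fitted in this work.
import numpy as np

def predict_sl(glucose, oil, nitrogen, b):
    """Return a predicted SL titre (g/l) for glucose (g/l), rapeseed oil (ml/l)
    and nitrogen (g/l CSL) under a generic quadratic response surface."""
    terms = np.array([1.0, glucose, oil, nitrogen,
                      glucose * oil, glucose * nitrogen, oil * nitrogen,
                      glucose ** 2, oil ** 2, nitrogen ** 2])
    return float(np.dot(b, terms))

# Placeholder coefficients (intercept, linear, interaction, quadratic terms);
# in practice these come from the fitted regression model.
b = np.array([10.0, 0.1, 0.2, -2.0, 0.0, 0.0, -0.05, -0.0005, -0.001, 0.5])

# A feedstock known to contribute some glucose/oil/nitrogen of its own would be
# topped up so that the total composition matches the target before prediction.
print(round(predict_sl(glucose=50.0, oil=100.0, nitrogen=0.8, b=b), 1))
```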
As shown, the regression model possesses statistical strength (adjusted R 2 = 0.878), with a low level of potential outliers. Those outliers that were identified indicate that further advancements could be made to both the regression model and the quantification. From Design 1 (Fig. 1), it was clear that the gravimetric quantification method had a greater level of variability at lower concentrations of SL at the 5 ml harvest quantity. Whilst larger quantities at harvest can help to resolve this, the application of more accurate quantification methods such as HPLC and LC-MS would reduce variability, improve accuracy and ultimately improve the predictive capabilities of the model (Davila et al., 1993;Ratsep and Shah, 2009).
Standard least squares regression is a powerful tool for approximating models, but can encounter limitations when handling complex non-linear functions over a large range, such as those found in the quadratic-linear interactions of the rapeseed oil and nitrogen. To determine accuracy of the current model, extra shake flask experiments were performed after this series of work using the predicted optimum media composition (Fig. 5) which was only capable of producing an average of 33 g l -1 SL, rather than the 40 g l -1 predicted. Given the breadth of concentrations tested in this work, further investigation should be applied to look at advanced regression models capable of more accurately predicting SL production, exploring potential points within the design space where extra experimental data could improve the accuracy of predictions.
Effect of nitrogen on productivity and growth
As described, nitrogen may be supplied to the fermentation media in order to stimulate the growth of cells, enabling a higher density of cells capable of producing SL. In order to enter SL production, cells must go through a stress state, which is presumed to be caused by the depletion of nitrogen. This entry into the production phase is demonstrated by the distinction in growth after 24 h (Fig. 7C), where lower levels of nitrogen (0.1273 and 0.8 g l -1 CSL) see little increase in cell mass as nitrogen depletes and they begin to enter the production phase of the fermentation. At higher levels (1.47 g l -1 CSL), cell mass increases greatly between 24 and 96 h as there is a greater availability of nitrogen in the media which increases the length of the growth phase, delaying entry to the production phase. This is seen in the quadratic curvature of nitrogen (Fig. 4). Initially, the tested range of nitrogen (10-3 g l -1 CSL) is too high to allow entry into the productive state. As levels reduce (3-0.127 g l -1 ), nitrogen is more likely to become fully depleted, allowing the cells to become productive, with greater production at lower levels of nitrogen. It is important to note, however, that whilst the regression model indicates 0.127 g l -1 CSL as optimum, the actual best tested concentration was 0.8 g l -1 CSL, suggesting some level of nitrogen is required to provide a small amount of growth that bolsters production.
Within the literature, there is conflicting information on the importance of nitrogen for SL production. The application of models/design of experiments, such as those of Minucelli et al. (2017) and Rispoli et al. (2010), indicates that the presence of nitrogen is generally prohibitive to SL production. Conversely, papers with the highest level of productivity (used to formulate the initial media composition for this work) use far greater levels of nitrogen than those found in this work (5 g l -1 CSL, 4 g l -1 ammonium sulphate or 6 g l -1 yeast extract, 5 g l -1 peptone) (Davila et al., 1992;Rau et al., 2001;Dolman et al., 2017). Both Rau et al. (2001) and Davila et al. (1992) are able to show complete consumption of nitrogen within the early stages of the fermentation, suggesting that the improved scale (shake flask to bioreactor) and mode of operation (batch to fed batch) allows for greater consumption of nitrogen. Despite this, the findings of this work still apply; the levels of nitrogen must be optimized to gain a balance between sufficient growth and a lengthy production phase in order to be productive. At larger scale, the exact nitrogen requirements may alter; however, the fundamental balance of growth/production still remains.
Effect of rapeseed oil on productivity and growth
Following model regression, it was shown that rapeseed oil has an important role in sophorolipid production (Fig. 2B). This is in good agreement with the literature, where removal or decline of hydrophobic carbon supply (via triglycerides, ethyl esters or other fatty acid residues) results in a large decline in production (Davila et al., 1992;Rau et al., 1996). Within the ranges tested, there was a clear decline in production as oil levels declined < 50 ml l -1 , indicating that the minimum concentration for production had been found. This is supported by the relation between sophorolipid yield, oil consumption and initial oil concentrations (Fig. 6); as levels decline < 50 ml l -1 , there is insufficient oil to sustain production over the 168 h period and final sophorolipid concentrations decline. Minimum oil requirements have not been extensively studied in batch models within the literature; however, fed-batch models have shown maximum production with consumption of 140 g l -1 rapeseed oil or 184 g l -1 rapeseed ethyl ester as examples (Davila et al., 1992;Rau et al., 2001). As well as improving SL production, increased levels of oil appear to increase biomass (Fig. 7B). Whilst the exact reason for this is unclear, there has been demonstration of increased biomass by association of fatty acids to large vacuoles in S. bombicola that could lead to alteration of the cell size and subsequent optical density (Hommel et al., 1994). As shown in the surface (Fig. 4) and predictive (Fig. 5) plots, SL production rapidly increases from 50 ml l -1 of rapeseed oil until it begins to flatten as it reaches the optimum value, indicating that the levels of oil become sufficient to sustain production. As the oil concentration increases past the optimum value, production begins to decline, indicating that higher levels of oil begin to inhibit the SL production. This has similarly been evidenced in the literature, where an excess of rapeseed oil, over 10 g l -1 in fed-batch models, has been shown to cause a decline in production, presumably through the toxic effect of excess fatty acids and the impact on mass transfer (Rau et al., 1996). The presence of cubic curvature for oil is only exaggerated at higher levels of nitrogen in the surface plots/predictive models (Figs 4 or 6); however, this is more likely an over-exaggeration by the regression model, particularly as combinations of high oil (> 150 ml l -1 ) and high nitrogen (> 5 g l -1 CSL) have not actually been tested, with the model instead assuming an increase in production based on two design points (Design 1 and 2, patterns 0A0).
Interaction between oil and nitrogen
As shown in the regression model, significant interactions were found between the oil and nitrogen (Fig. 2B) and how they affected the subsequent production of SL. The two terms interact both in a linear relation (Oil*Nitrogen) and in a relation between the curvature of oil and the linear effect of nitrogen (Oil*Oil*Nitrogen). At its simplest, the two terms work to effectively inhibit the effect of the other depending on their initial concentration. Following Fig. 7C, a high initial nitrogen concentration will nullify the effect of altering the amount of oil on SL production, as the cells focus solely on growth and do not utilize rapeseed oil, leading to a 'flattening' of the profile of the oil. Inversely, as oil levels begin to decline, the beneficial effect of reducing nitrogen on SL production is reduced as the oil becomes fully depleted, removing the primary substrate for SL production from the fermentation media. These effects are shown directly in the fermentation broth (Fig. 7C), where greater levels of growth are seen with higher nitrogen (as the growth phase is sustained) and glycerol production is lowered, indicating that triglyceride breakdown (and general consumption of oil) is reduced.
The interaction is also reflected in Fig. 6. At lower levels of oil (< 50 ml l -1 ), the substrate supply for SL production is insufficient and nitrogen exerts no effect on improving SL production. As oil levels increase (> 100 ml l -1 ), the substrate supply becomes sufficient to sustain SL production for the 168 h period (as shown by oil consumption levels being < 100%). At this point, SL production is reliant on reducing the nitrogen concentration, as the fermentation becomes either focussed on production (low nitrogen) or growth (high nitrogen). The findings here highlight how one media component may exert an effect on another and must be carefully considered/modelled. In the case of oil and nitrogen, it is important to understand the core purpose of each component (i.e. substrate supply and biomass development) and consider how they may hinder/boost one another.
Effect of glucose on productivity
Hydrophilic carbon sources, in particular glucose, are highlighted in the literature as being an important component in sophorolipid production, driving glycolysis and glycogenesis to generate UDP-glucose, the precursor to the sophorose moiety of sophorolipids (Hommel et al., 1994). For example, whilst Davila et al. (1997) found that production could occur with the sole presence of rapeseed ethyl esters, the inclusion of glucose during the production phase significantly increased sophorolipid production. When comparing against other DoE work, the importance of glucose is repeated. Rispoli et al. (2010) found that sugar, even in disaccharide form (sucrose, fructose and lactose), was required to gain a productive process. Minucelli et al. (2017) applied a Box-Behnken design not too dissimilar from Design 2 at 25 ml working volume, exploring the optimum concentration of glucose, chicken fat and urea and found an 83% reduction in productivity when glucose concentrations were reduced from 100 to 10 g l -1 . Comparatively, our work was able to produce an average of 32.43 g l -1 SL even with only 3.086 g l -1 glucose (Design 2 -a00, Fig. 3).
Following the completion of Design 1, it was theorized that the original concentrations of glucose were all too high to lead to depletion within the broth, meaning a decline in SL production was not possible. However, consumption profiles in Design 2 ( Fig. 7A-C) highlighted that glucose was actually depleted in most flasks by 96 h. With the assumption that the rate of consumption between sampling points (24 and 96 h) was identical, it would be expected that flasks with lower initial glucose would become depleted earlier and have a statistically significant decline in SL production if glucose was a significant term. Whilst there was evidence that increased glucose did improve production (32.43-39.42 g l -1 SL from patterns a00 to A00 of Design 2), presumably through its sustained presence in the broth, the level was not significant enough to be highlighted within the model.
Interaction between glucose and glycerol
The likely explanation for this lack of significance can be found in the switch from glycerol accumulation (a byproduct of triglyceride breakdown for the generation of fatty acids) to consumption at the point of glucose depletion (Fig. 7A-C). Initially, S. bombicola is unable to co-utilize glycerol in the presence of glucose due to carbon catabolite repression, leading to glycerol accumulation within the broth (Gancedo, 1998;Lin et al., 2019). At the point of glucose depletion, the repression is removed and glycerol becomes the new hydrophilic carbon source, sustaining SL production up to the point of harvest. As such, the presence of glycerol effectively masks the potentially deleterious effects of glucose depletion, leading to the lack of significance of glucose found in the model. Further to this, glycerol never fully depletes in the broth (even at the lowest oil concentration, 15.9 ml l -1 , Fig. 7A), meaning there is no point at which a hydrophilic carbon source is not present. In reality, glucose and oil do interact; higher initial concentrations of glucose increase productivity as a more efficient hydrophilic carbon source is available for longer, whilst also generating glycerol. When glucose is eventually depleted, a minimum level of oil is required to generate sufficient glycerol to sustain the hydrophilic carbon supply up to the point of harvest. It is likely that with longer fermentation periods or lower initial oil, glycerol would become fully depleted and the significance of glucose would have been highlighted.
The findings here indicate a potential balance that must be struck when selecting efficient hydrophilic carbon sources and glycerol-producing hydrophobic sources. As sources rich in free fatty acids are expensive, it is more likely that oils rich in triglyceride forms of the required fatty acids will be used, leading to the production and accumulation of glycerol in the broth. Glycerol is known to exert an inhibitory effect on both upstream and downstream production of SLs as it affects broth rheology, mixing and separation (Buchholz et al., 1978;Lin et al., 2019). Within this work, the presence of glycerol caused complications in the downstream processing of the fermentation broth; glycerol levels > 5 g l -1 led to cell mass aggregation into distinct miniature 'clumps' that made isolation of the sophorolipid and oil phase difficult during solvent extraction. As a result, dry cell weight measurements were not possible in this work, which subsequently led to the development of an ethanol treatment step to improve optical density measurements by removing glycerol from the broth. With this potential inhibitory effect in mind, there may be an advantage in temporarily allowing the depletion of an 'efficient' hydrophilic carbon source (i.e. glucose) to allow for the consumption of glycerol to levels where it does not exert an inhibitory effect. Fed-batch production processes could easily monitor glycerol levels and alter the feeding rate of glucose to allow for glycerol consumption to remove this effect. However, this will be a balance between the potential decline in productivity from swapping to a less efficient hydrophilic carbon (glucose to glycerol) and the loss in productivity caused by glycerol accumulation.
Conclusion
Through the application of a CCCD response surface methodology, a large data set capable of demonstrating the effect of varying three primary media components towards sophorolipid production has been produced. With the application of stepwise and standard least squares regression modelling, it has been possible to produce a statistically significant regression model that can profile these media components and find an optimum composition for SL productivity, with future applications in applying the predictive model in feedstock screening. An iterative design process, in which two of the model terms were augmented to supplement the data set, has been applied making it possible to explore a large design space and gain confidence that the optimal conditions have been found to maximize SL productivity at shake flask scale, giving a meaningful baseline SL productivity to measure feedstock performance against.
By monitoring the conditions within the fermentation broth, the output of the regression model has been linked to the changes seen within the fermentation and expands the understanding of the role of each component within sophorolipid fermentation. The importance of nitrogen was highlighted as high levels cause a sustained growth phase, reducing SL productivity. It will be interesting to see whether the requirements for nitrogen differ as the process is scaled up and fermentation control (pH, feed control and oxygen) improves, as there is mixed evidence in the literature of nitrogen requirements. Rapeseed oil is important as the primary substrate for SL production and was capable of minimizing the effect of glucose depletion within the broth, via the production (and subsequent consumption) of glycerol.
Despite its prominence in the literature, glucose did not have a significant effect on SL production, even though a wide range of concentrations were tested. The utilization of glycerol, generated via rapeseed oil metabolism, reduced any deleterious effect of glucose depletion, highlighting the potential of this by-product to aid SL production. Controlled consumption of glycerol during SL fermentation through fed-batch processes may help to supplement the hydrophilic carbon supply and reduce the presence of an inhibitory component, improving productivity.
Experimental procedures
Design 1 -Initial central composite circumscribed design
JMP 15 was used to generate an experimental design for a response surface model capable of studying the effect of glucose, rapeseed oil and corn steep liquor/ammonium sulphate on SL production, as shown in Table 3A. A central composite circumscribed design was chosen due to its suitability for process optimisation, with the assumption that non-observed/controlled variables (temperature, trace elements and seed concentration) were consistent and unlikely to produce a statistically significant effect. For a CCCD, the media components are altered to different levels at low (-), centre (0) and high (+) values ('factorial' range) initially, with the addition of extremely high ('A') and low ('a') values ('axial' range). To begin, the literature concentrations of glucose, rapeseed oil and corn steep liquor/ammonium sulphate from Rau et al. (2001) and Davila et al. (1992) were chosen as the centrepoints (000) within the design, with the factorial range chosen as +/− 50% of the centrepoint concentration ('+' and '-'). The design was circumscribed with axial values ('a' and 'A') at an alpha value of 1.682, allowing for rotatability and consistent predictive variance from all points around the centrepoint. Corn steep liquor and ammonium sulphate were kept within the same ratio (5 g l -1 :4 g l -1 , respectively) as the nitrogen source development was not the focus of this work. All patterns were run in duplicate to determine variance and model fit. The decision was made to retain all factorial runs in one block and axial runs in another. This way any blocking effect (a change in the variance or values of any given replicate/repeat) would only influence the axial or factorial values respectively. An additional set of three patterns tested in block 1 (+--, ++and -+-) were repeated once in block 2.
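For readers who want to reproduce the level-setting logic, the sketch below derives the factorial and axial concentrations from the centrepoint values (the Rau et al. (2001) media quoted later in this section: 100 g l -1 glucose, 100 g l -1 rapeseed oil and 5 g l -1 CSL, with ammonium sulphate following at the fixed 5:4 ratio). It is a schematic illustration rather than the JMP 15 output.

```python
# Sketch of Design 1 level setting: factorial levels at +/-50% of the
# centrepoint and axial levels at alpha = 1.682 (coded units). Only the CSL
# value is listed for nitrogen; ammonium sulphate follows at the fixed ratio.
import itertools

centre = {"glucose": 100.0, "oil": 100.0, "csl": 5.0}
step = {k: 0.5 * v for k, v in centre.items()}      # +/-50% factorial range
alpha = 1.682                                        # rotatable CCCD

def decode(factor, coded):
    """Convert a coded level (e.g. -1, 0, +1, +/-alpha) to a concentration."""
    return centre[factor] + coded * step[factor]

factors = list(centre)
runs = []
for combo in itertools.product([-1, 1], repeat=len(factors)):   # factorial block
    runs.append({f: decode(f, c) for f, c in zip(factors, combo)})
runs.append(dict(centre))                                        # centrepoint (000)
for i, f in enumerate(factors):                                  # axial block
    for a in (-alpha, alpha):
        coded = [0.0] * len(factors)
        coded[i] = a
        runs.append({g: decode(g, c) for g, c in zip(factors, coded)})

for run in runs:
    print({k: round(v, 1) for k, v in run.items()})
```

With these centrepoints, the axial low values come out at approximately 15.9 (glucose and oil) and 0.8 g l -1 (CSL), consistent with the 'a' levels discussed elsewhere in the text.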
Design 2 -Augmentation of model
Following on from Design 1, it was decided that nitrogen and glucose concentrations needed further reduction to determine the effect on SL production. As such, the original design was augmented with a secondary central composite circumscribed design using JMP 15. The media component concentrations of Design 2 are summarized in Table 3B.
The reduction in these components aimed to serve two different purposes. For nitrogen, it was clear that high productivity was found around 0.8 g l -1 CSL, so this was placed as the centrepoint and a range of +/− 50% of that concentration used as the factorial values, with the hope of finding a precise optimum point. For glucose, there was indication that productivity did not decline between 50 ('-' value) and 15.9 g l -1 glucose ('a' value), so the decision was made to explore this range in greater detail in the hope of finding the point of decline, using 50 g l -1 as the upper factorial ('+') value and 15 g l -1 as the lower (close to the axial value tested in Design 1). As per the original design, axial values were added to obtain a fully rotatable design (alpha = 1.682) and values run in duplicate. The range of oil concentrations was retained from Design 1 as they were found to produce quadratic curvature (tailing at lower and higher concentrations), meaning the profile did not require further exploration. The relation between the two designs is shown graphically in Fig. 8.
Table 3. Media composition of (A) Design 1 and (B) Design 2 of the central composite model. Each feedstock was tested at five concentrations, marked by symbols representing their position in the factorial (−, 0 and +) and axial (a and A) levels of the design.
Microbial culture and maintenance
Starmerella bombicola ATCC 222144 was used in this study. Cryopreserved inoculum sourced from a single S. bombicola colony grown on YPG agar (6 g l -1 yeast extract, 5 g l -1 peptone and 100 g l -1 glucose) was prepared to inoculate seed culture flasks. A single colony was inoculated into media containing 5 g l -1 CSL, 4 g l -1 ammonium sulphate, 100 g l -1 rapeseed oil and 100 g l -1 glucose and incubated to mid-exponential growth (OD 600 = 10, 2.35 × 10 8 cfu ml -1 ) prior to the addition of 10% glycerol and storage at −80°C.
Production flask conditions
All cultures were performed in 250 ml 4-bottom baffled flasks (Thermo Scientific Nalgene; Fisher Scientific, Loughborough, UK) with a 50 ml media fill volume, with the media composition being dictated by the model design. Flasks were incubated in an orbital shaker at 25°C, 200 rpm and 0.75 inch throw. Production flasks were inoculated with 5 ml seed culture transferred at a target OD 600 of 11-15.0 (mid-exponential growth). Media used for the development of the seed consisted of 5 g l -1 CSL, 4 g l -1 ammonium sulphate, 100 g l -1 rapeseed oil and 100 g l -1 glucose (Rau et al., 2001).
Analytical methods
Samples, 1 ml, were taken periodically in order to quantify the optical density and media composition of the fermentations. To prepare, 1 ml samples were centrifuged at 13 500 rpm for 5 min and supernatant removed for HPLC analysis. The cell pellet was then washed and vortexed with 0.5 ml 70% ethanol in order to remove residual sophorolipid and glycerol, followed by centrifugation at 13 500 rpm for 5 min, removal of the supernatant and resuspension of the pellet in 1 ml (original volume) 0.9% saline. This resuspension was used to quantify biomass via OD 600 measurement using a UV spectrophotometer. Glucose and glycerol quantification was performed with HPLC using an UltiMate ® 3000 system with an Aminex HPX-87P column and isocratic elution with 0.6 ml min -1 of 5 mM H2SO4, with a 5 µL injection volume and column temperature of 50°C. Samples were analysed with a RefractoMax 520 refractive index detector (Thermo Fisher Scientific, UK) and compared with a calibration curve of glucose and glycerol. Glycerol was selected for quantification as it is a by-product of the breakdown of triglycerides in the rapeseed oil required to generate the fatty acids for SL production, allowing for offline estimation of oil consumption in the fermentation broth.
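The conversion from detector signal to concentration relies on the calibration curve mentioned above; the sketch below shows the usual linear fit. The standard concentrations and peak areas are invented for illustration and are not measurements from this work.

```python
# Sketch: linear calibration of refractive-index peak area against known
# glucose/glycerol standards, then interpolation of an unknown sample.
# All numbers are illustrative placeholders.
import numpy as np

standards_g_l = np.array([1.0, 5.0, 10.0, 25.0, 50.0])   # standard concentrations
peak_areas    = np.array([0.9, 4.8, 10.2, 24.6, 50.5])   # corresponding areas

slope, intercept = np.polyfit(peak_areas, standards_g_l, deg=1)

def area_to_concentration(area):
    """Estimate a sample concentration (g/l) from its measured peak area."""
    return slope * area + intercept

print(round(area_to_concentration(12.3), 2))
```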
Liquid:liquid extraction
In order to gain a confluent sample for extraction and minimize phase separation within the broth, whole fermentation broths were transferred at harvest to 50 ml Falcon tubes and vortexed well (30 s, 2000 rpm), followed by immediate sampling of 5 ml whilst the broth was still agitated/in motion. Samples were then heated at 60°C for 15 min in a water bath in order to dissolve lactonic sophorolipid crystals.
Hexane extraction was used for rapeseed oil removal and quantification, followed by sophorolipid recovery and quantification using ethyl acetate. Equimolar quantities of hexane were added and vortexed at 2000 rpm for 15 s and the hexane phase removed onto a pre-weighed aluminium tray 2 times, with the final extraction samples centrifuged at 5000 rpm for 10 min to ensure full removal of the hexane phase. To remove SL product, three separate ethyl acetate extractions were performed and dispensed on pre-weighed aluminium trays. Both solvent extractions were allowed to dry overnight at ambient temperature on their pre-weighed aluminium tray. These trays were then weighed and the final value calculated in relation to the final volume of broth. Quantities of rapeseed oil were converted from weight (g l -1 ) to volume (ml l -1 ) using the relative density of rapeseed oil (0.915) where required.
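The gravimetric calculation itself is simple arithmetic; the sketch below shows it for one 5 ml sample, including the density-based conversion of rapeseed oil from g l -1 to ml l -1. The tray masses are invented for illustration.

```python
# Sketch of the gravimetric quantification for a single 5 ml broth sample.
# Residue mass on the pre-weighed tray, scaled by the sample volume, gives a
# broth concentration; oil is converted to ml/l with a relative density of 0.915.
sample_volume_ml = 5.0

tray_empty_g    = 1.250
tray_with_sl_g  = 1.450    # tray + dried ethyl acetate (sophorolipid) extract
tray_with_oil_g = 1.300    # tray + dried hexane (rapeseed oil) extract

sl_g_per_l   = (tray_with_sl_g - tray_empty_g) / sample_volume_ml * 1000.0
oil_g_per_l  = (tray_with_oil_g - tray_empty_g) / sample_volume_ml * 1000.0
oil_ml_per_l = oil_g_per_l / 0.915

print(f"SL: {sl_g_per_l:.1f} g/l, residual rapeseed oil: {oil_ml_per_l:.1f} ml/l")
```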
Model regression
Following quantification of the sophorolipid for the tested fermentation flasks, regression analysis was performed to compare the effects of the hydrophilic carbon (glucose), hydrophobic carbon (rapeseed oil) and nitrogen (CSL/ammonium sulphate) factors on the final quantities of sophorolipids at harvest (168 h). Model effects were chosen to include single-factor effects (X1), two-factor interactions (X1*X2) and quadratic curvature (X1*X1). These were later supplemented in Design 2 with quadratic-linear interactions (X1*X1*X2) and cubic curvature (X1*X1*X1).
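In formula notation, the model effect structure listed above can be written out explicitly. The original analysis was performed in JMP 15; the sketch below expresses the same term structure with statsmodels on synthetic placeholder data, purely to make the effect hierarchy concrete.

```python
# Sketch of the model effect structure (single factors, two-factor interactions,
# quadratic curvature, plus the quadratic-linear and cubic terms added with
# Design 2), written as a regression formula. Data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "glucose": rng.uniform(15, 185, 60),     # g/l
    "oil": rng.uniform(15, 185, 60),         # ml/l
    "nitrogen": rng.uniform(0.1, 10, 60),    # g/l CSL
    "block": rng.integers(1, 5, 60).astype(str),
})
df["sl"] = rng.normal(25, 5, 60)             # placeholder response (g/l)

formula = (
    "sl ~ block + glucose + oil + nitrogen"
    " + glucose:oil + glucose:nitrogen + oil:nitrogen"   # two-factor interactions
    " + I(glucose**2) + I(oil**2) + I(nitrogen**2)"      # quadratic curvature
    " + I(oil**2):nitrogen"                              # quadratic-linear term
    " + I(oil**3)"                                       # cubic curvature
)
model = smf.ols(formula, data=df).fit()
print(round(model.bic, 1))   # the criterion used for stepwise term selection
```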
Stepwise linear regression was performed on the JMP 15 platform to select a model with a combination of factors with the lowest Bayesian Information Criterion (BIC) score, with the inclusion of the blocking term regardless of significance. In cases where higher-order terms (i.e. two-factor, three-factor and quadratic) demonstrated significance and lower terms (one-factor) did not, the lower-order terms were still retained in the regression. Following this, standard least squares regression was performed on the selected factors and analyses performed as detailed in this work.
Fig. S1. Prediction expression for SL production with glucose, rapeseed oil and nitrogen (in relation to corn steep liquor, with a ratio of 1:0.8 g/L corn steep liquor:ammonium sulfate) for the combined data set from Design 1. Model was sequentially developed from stepwise regression (for model effect combination with the lowest Bayesian Information Criterion value) and standard least squares regression.
Fig. S2. Predicted by actual plot of Design 1 values with the initial Design 1 regression model. Points labelled by pattern are those that are close to/exceed 95% individual t distribution limits as dictated by externally studentized residuals.
Fig. S3. Externally studentized residual plot of results from Design 1 with the initial regression model. Outer limits (red) are 95% Bonferroni limits, and inner limits (green) are 95% individual t limits. Values close to/exceeding the inner limits are labelled by pattern.
Fig. S4. Prediction expression for SL production with glucose, rapeseed oil and nitrogen (in relation to corn steep liquor, with a ratio of 1:0.8 corn steep liquor:ammonium sulfate) for the combined data set from Design 1 and Design 2. Model was sequentially developed from stepwise regression (for model effect combination with the lowest Bayesian Information Criterion value) and standard least squares regression.
Fig. S5. Predicted by actual plot of Design 1 (red) and 2 (blue) of the CCD from JMP 15 with the combined regression model. Points labelled by pattern are those that are close to/exceed 95% individual t distribution limits as dictated by externally studentized residuals.
Fig. S6. Externally studentized residual plot of results from Design 1 (red) and Design 2 (blue) with the combined regression model. Outer limits (red) are 95% Bonferroni limits, and inner limits (green) are 95% individual t limits. Values close to/exceeding the inner limits are labelled by pattern.
Table S1. Complete data set of executed fermentation flasks under Design 1 and 2 of the central composite circumscribed design. Actual sophorolipid quantities are supplied alongside the predicted values from the regression models of Design 1 and Design 1/2. Nitrogen values refer to the quantity of corn steep liquor, with a ratio of 1:0.8 g/L corn steep liquor:ammonium sulfate.
Table S2. Analysis of variance of the combined regression model and model terms. | 2022-01-19T06:23:38.756Z | 2022-01-17T00:00:00.000 | {
"year": 2022,
"sha1": "0c568d9ee2d958ed8b5cef709275011e682f47f9",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "3ae84f2b28a159efebdf0474ab62113587f32d2d",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
209420556 | pes2o/s2orc | v3-fos-license | Empathy and compassion toward other species decrease with evolutionary divergence time
Currently the planet is inhabited by several millions of extremely diversified species. Not all of them arouse emotions of the same nature or intensity in humans. Little is known about the extent of our affective responses toward them and the factors that may explain these differences. Our online survey involved 3500 raters who had to make choices depending on specific questions designed to either assess their empathic perceptions or their compassionate reactions toward an extended photographic sampling of organisms. Results show a strong negative correlation between empathy scores and the divergence time separating them from us. However, beyond a certain time of divergence, our empathic perceptions stabilize at a minimum level. Compassion scores, although based on less spontaneous choices, remain strongly correlated to empathy scores and time of divergence. The mosaic of features characterizing humans has been acquired gradually over the course of the evolution, and the phylogenetically closer a species is to us, the more it shares common traits with us. Our results could be explained by the fact that many of these traits may arouse sensory biases. These anthropomorphic signals could be able to mobilize cognitive circuitry and to trigger prosocial behaviors usually at work in human relationships.
unanswered about our ability to connect emotionally with other organisms. Does it apply to all living beings or is it limited to a particular perimeter? To what extent does phylogenetic proximity explain our ability to understand their emotions and to express sympathy towards them? Does it decrease linearly with the time of phylogenetic divergence separating them from us, or stepwise, depending on particular level of organization, i.e. corresponding to evolutionary grades? What is the nature of the stimuli at the origin of these perceptions and how can they arouse in us emotions comparable to those usually expressed within human relationships? And in a broader extent, how can we explain in the frame of the natural selection paradigm, the existence of altruistic behaviors between different species?
In order to fill some of these gaps, the present investigatory project was designed to provide the first cartography of the living world through the human empathy-related responsiveness it may arouse, and to interpret its variations in a phylogenetic comparative framework. Our online survey involved 3500 raters who had to make preference choices over an extended photographic sampling of organisms, designed to be as representative as possible of the phylogenetic diversity of life (microscopic organisms excluded). Choices were driven by two different questions. Indeed, as there are many different definitions -and a nebulous usage -of the term empathy, and a wide array of mental states and notions related to this concept (ex. sympathy, cognitive or affective empathy, compassion, self-other distinction, affect sharing or emotional contagion [13][14][15][16]), two different questions were formulated in order to distinguish empathic-like perceptions from compassionate-like responses. The notion of empathy is used here to refer to the capability to connect with one another at an emotional level 14,17 . The question proposed to the raters to assess their empathic preferences was "I feel like I'm better able to understand the feelings or the emotions of [choice among a pair of pictures representing different organisms]". In contrast, the notion of compassion (also termed empathic concern) has been used here to refer to the feeling of concern for the suffering of others, associated with a motivation to help 13,18,19 . The corresponding question proposed to raters was "If these two individuals were in danger of death, I will spare the life of [choice among a pair of pictures] as a priority" (Fig. 1).
Results
For the question related to empathy, the probability to be chosen decreased with the phylogenetic distance relatively to humans, compared to the alternative species (Fig. 2, SI Appendix, Tables S1 to S4, Figure S1).
For each relative reduction of phylogenetic distance of one million years, the probability to be chosen increased by 2.54 (SE = 0.19) in linear units (logit). Results varied according to the raters' sex (P = 0.02), age (P < 0.001), knowledge on biodiversity (P < 0.001), opinion on hunting and fishing (P = 0.01), and opinion on the value of animal life relatively to humans (P < 0.001). Directions of effects are indicated in Table S3, and depicted in Fig. 3A. The empathy score, computed for each species, varied between 0.12 and 0.91, and decreased quadratically with divergence time (linear slope: −1.2 10 −3 , F 1,49 = 258, P < 10 −16 ; quadratic term: 5.3 10 −7 , F 1,49 = 99.8, P < 10 −13 ). For divergence times greater than 611.1 Mya, the empathy score no longer decreased with divergence time (estimated inflexion point, with a 95% confidence interval running from 518 to 703 Mya).
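As a point of reference for readers less used to the logit scale, a coefficient reported in logit units can be converted to an odds ratio with the standard transformation; the short calculation below does this for the empathy estimate. The resulting odds ratio is a derived quantity, not a value reported in the original analysis.

```python
# Converting the logit-scale estimate into an odds ratio and illustrating the
# logit-to-probability transformation (standard logistic-regression algebra).
import math

beta = 2.54                      # logit change per one-million-year relative reduction
print(round(math.exp(beta), 2))  # corresponding odds ratio, ~12.7

logit = 0.5                      # an arbitrary predicted logit
print(round(1.0 / (1.0 + math.exp(-logit)), 3))   # choice probability, ~0.622
```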
For the question related to compassion, the probability to be chosen decreased with the phylogenetic distance relatively to humans, compared to the alternative species (Fig. 2, SI Appendix, Tables S1 to S4, Figure S1). For each relative reduction of phylogenetic distance of one million years, the probability to be chosen increased by 0.63 (SE = 0.13) in linear units (logit). Results varied according to the raters' age (P < 0.001), diet (P < 0.001), knowledge on biodiversity (P = 0.01), opinion on hunting and fishing (P = 0.001), opinion on the value of animal life relatively to humans (P < 0.001), and number of pets (P = 0.016). Directions of effects are indicated in Table S3, and depicted in Fig. 3A. The compassion score, computed for each species, varied between 0.08 and 0.79, and decreased quadratically with divergence time (linear slope: −7.8 10 −4 , F 1,49 = 76.6, P < 10 −11 ; quadratic term: 3.9 10 −7 , F 1,49 = 39.0, P < 10 −8 ). For divergence times greater than 564.9 Mya, the compassion score no longer decreased with divergence time (estimated inflexion point, with a 95% confidence interval running from 413 to 797 Mya).
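The inflexion points quoted here come from the broken-line fit described in the Methods (four parameters estimated by minimizing the sum of squared residuals with the rgenoud genetic optimizer in R). A minimal Python analog on synthetic data, using scipy's differential evolution in place of rgenoud, is sketched below.

```python
# Sketch: estimating an inflexion point by fitting a two-segment broken line to
# score vs. divergence time, minimising the sum of squared residuals.
# Data are synthetic; the original analysis used rgenoud in R.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
t = rng.uniform(0, 1500, 52)                               # divergence times (Mya)
score = 0.9 - 0.0012 * np.minimum(t, 600)                  # synthetic plateauing trend
score = score + rng.normal(0, 0.03, t.size)                # add noise

def broken_line(params, x):
    ip, slope_before, slope_after, intercept = params
    return (intercept + slope_before * np.minimum(x, ip)
            + slope_after * np.maximum(x - ip, 0.0))

def ssr(params):
    return np.sum((score - broken_line(params, t)) ** 2)

bounds = [(100, 1400),        # inflexion point (Mya)
          (-0.01, 0.0),       # slope before the inflexion point
          (-0.005, 0.005),    # slope after the inflexion point
          (0.0, 1.5)]         # intercept at t = 0
fit = differential_evolution(ssr, bounds, seed=1)
print("estimated inflexion point (Mya):", round(fit.x[0], 1))
```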
The empathy and compassion scores were correlated (Pearson's product-moment correlation = 0.868, t = 12.4, df = 50, P < 10 −15 ). The decrease in score with divergence time was faster for empathic scores than for compassion scores.
Figure 1. Experimental procedure. Based on a focused question, each evaluator had 22 pairs of pictures to evaluate (randomly drawn from a total of 52 species). The question, also randomly drawn at the beginning of the test, was intended to assess either empathic or compassionate preferences. (photos by A. Miralles).
The mean response time of raters decreases significantly with the absolute time of divergence between two organisms, regardless of the question asked (empathy or compassion driven) (Fig. 3B). It decreases by 0.168 s (SE = 0.012) for each increase of divergence time of 100 Myr. When the two species in the pair were equally divergent (absolute time of divergence = 0), mean response time was 7.96 s (SE = 0.15) and 6.54 (SE = 0.15) to empathy and compassion driven questions, respectively. This difference in response time between the two questions, 1.43 s (SE = 0.16), was independent of the absolute divergence time (interaction between absolute divergence time and the type of question: X 2 = 0.003, df = 1, P = 0.96) (SI Appendix, Table S4).
Figure 2. Empathy and compassion scores attributed to each organism as a function of divergence time (Mya) between them and humans. The scores correspond to the probability that a given species is chosen from a pair of species that includes it and another randomly selected (n = 52 species). See SI Appendix, Results S1 for details.
Figure 3. (A) Odds ratio (for a qualitative variable: ratio of the odds of choosing the most phylogenetically related species in the depicted factor level to the odds of it occurring in the reference factor level; for age, centered variable: ratio of the odds of choosing the most phylogenetically related species in age 1 to the odds of it occurring in age 0) are represented by dots and 95% confidence interval by lines; blue or red dots indicate variables linked with an increased or decreased, respectively, choice probability for the most phylogenetically related species (n raters = 1134 for the empathy test and 1213 for the compassion test). (B) Predicted participants' response time as a function of the absolute divergence time between the two species presented in each pair (area depicts the 95% confidence interval, n responses = 25001 for the empathy test and 26781 for the compassion test).
Discussion
Empathy, resemblance and relatedness: the anthropomorphic stimuli hypothesis. The ability to understand others' feelings through empathy is crucial for successful social interactions between humans 19,20 . Our predispositions for empathy are partly determined by our genes 21 and, in all likelihood, this prosociality driver has been selected during the evolution of our species, in facilitating coordination and cooperation between individuals 1,13,22 . The extension of our empathic sensitivity toward other living beings remains nevertheless an issue poorly explored from an evolutionary perspective.
Our results show that our ability to empathize considerably fluctuates from one species to another, and that its magnitude mostly depends on the phylogenetic distance that separates them from us. Although relatedness and resemblance (sensu overall similarity) refer to different concepts, they empirically tend to be correlated. In an anthropocentric frame of reference, it can therefore be postulated that relatedness (here expressed as the divergence time) corresponds to a rough holistic approximation of the total amount of shared external traits inherited from our common ancestor (synapomorphies), as retrospectively, they are expected to decrease relatively gradually over a long period of divergence.
Based on our results, we here hypothesize that our ability, real or supposed, to connect emotionally with other organisms would mostly depend on the quantity of external features that can intuitively be perceived as homologous to those of humans. The closer a species is to us phylogenetically, the more we would perceive such signals (and treat them as anthropomorphic stimuli), and the more inclined we would be to adopt a human-to-human-like empathic attitude toward it. Intuitively, the correlation could have been expected, but in fact the assumption was not as obvious as it seemed. Indeed, in phylogenetic thinking, overall similarity (the external features we do perceive) is not phylogenetic relatedness (ex. the coelacanth is perceived as more similar to the trout than to us, whereas it is more closely related to us than to the trout). It is interesting to note that, in spite of this difficulty, overall external similarity, as it generates anthropomorphic stimuli, is still globally correlated to phylogenetic relatedness.
Consistently with the anthropomorphic stimuli hypothesis, the overall linear correlation between empathic perceptions and phylogenetic divergence time suggests differences of degree, and not differences in kinds, in the perceptions we have of the different organisms. Indeed, our data do not show any break in our empathic perceptions that would explain the customary ethical stances opposing the intrinsic values of humans versus other organisms (ex. Abrahamic religions, humanism), tetrapods vs "fishes" (ex. pesco-vegetarianism), animals vs plants (ex. antispecism, veganism) or vertebrates vs non-vertebrates (ex. various system of regulations promoting animal welfare). In such representations, values manage relationships between us and other species in terms of oppositions, while our senses perceive a gradient of shared features between us and other species. Overall, these results suggest that raters recorded what is shared in the realm of perceptions, rather than mobilizing oppositions in the realm of ethical values. Likewise, we noticed that despite the fact that some rater's traits (such as opinions on the value of an animal's life comparatively to those of a human) can have an effect on empathy scores, their values remain overall strongly correlated with the time of divergence.
Interestingly, the retrospective inflexion (estimated at 611.1 Mya, 95% CI: 518-703 Mya) and the stagnation of the empathic perceptions curve coincide with the transition from gnathostomes (jawed vertebrates) to non-gnathostomes (lampreys and all the other clades whose divergence from us is equal to or greater than 615 Mya). Nevertheless, such an estimate is imprecise and should be considered with caution. The stagnation of our perceptions might also correspond to the prebilaterian organisms (in our dataset, all the sampled clades that have diverged from our lineage 824 Mya or earlier). Indeed, bilaterians, to which we belong, are characterized by a bilateral symmetry, with a ventrodorsal and an anteroposterior axis. Most often, they are mobile and have a head (concentration of the mouth, sense organs, and nerve ganglia at the front end). In contrast, clades having diverged from our lineage prior to bilaterians (cnidarians, fungi and plants in the present study) are lacking all these external traits and are most often sessile. The plesiomorphic anatomical organization of these "neither heads nor tails" organisms can be destabilizing from a perceptual point of view: it is almost impossible to spontaneously establish structural or behavioral homologies connecting them to us, likely reducing our empathic projection ability to its minimum. Accordingly, several bilaterian organisms having secondarily lost externally visible bilateral symmetry (echinoderms) or undergone spectacular changes of their anatomical organization (tunicates and bivalves) present minimal empathic scores among bilaterians (their empathic scores are actually equivalent to those attributed to non-bilaterian organisms, which may have contributed to the shift of the inflection point of the curve toward a more recent time). Among the macroscopic organisms present in our sampling, such a low level of empathy is interpreted as the most basic anthropomorphic signal, and may correspond with the recognition of an entity as a living being. Overall, these results suggest that humans are relatively indifferent to organisms that do not show obvious signs of antero-posterior and dorso-ventral differentiations.
Shift between empathy and compassion. The extension of altruistic intentions (e.g. sympathetic or compassionate behaviours) to other organisms remains enigmatic from an evolutionary perspective, especially if we consider the latter as potential competitors, predators or as a valuable food resource for our species 23 .
Our data shows that empathy and compassion scores are significantly correlated to each other, and that both decrease with divergence time. These results were relatively expected as empathy is known to promote compassionate responses, although the neuronal networks recruited by each of these mental states have been shown to be distinct 19 .
Nevertheless, the trends obtained in these two analyses differ in several ways (Fig. 4): (i) the correlation with divergence time is less pronounced for compassion scores than for empathy scores, and the decrease in scores with divergence time is slower for compassion than for empathy; (ii) the retrospective inflexion and the stagnation of the compassion scores curve seem to occur more recently than for the empathy scores (564.9 Mya for compassion scores versus 611.1 Mya for empathic scores, SI Appendix, Figs. S1 and S2); (iii) recorded response times are significantly higher for the compassion test, suggesting here a greater hesitation from the raters, but the difference in response time between the two types of question is remarkably steady and independent from the phylogenetic distance between two species (Fig. 3B); (iv) some features of the evaluators (e.g., diet) have a confounding effect on the probability of choosing the closest phylogenetically related species that is more pronounced for compassion scores than for empathy scores (Fig. 3A, SI Appendix, Tables S2 and S3); and finally, (v) for a few taxa only, the decisions made by the raters in the compassion questionnaire are strikingly dissociated from the empathic perceptions they felt (Fig. 4). Indeed, although the empathic scores attributed to the tick and the oak tree correspond relatively well to those obtained for the other protostomians and plants, respectively, their compassion scores are notably disconnected from those attributed to their relatives (strikingly lower for the tick and higher for the oak). The compassion score given to the tick is actually so low (well below the plateau formed by all the other low compassion score species) that it could be tempting to consider this result as a sharp expression of antipathy rather than as a mere lack of compassion. The strong aversion to parasitic species is not surprising given the threat they represent, and might explain the observed dissociation between empathic perceptions and compassionate responses. However, this trivial interpretation is counterbalanced by the fact that another potentially threatening species, the great white shark, reached a relatively high compassion score in comparison with both its empathic score and phylogenetic distance from humans. The high compassion score for the oak tree also represents an outlier difficult to interpret. The imposing size of trees, their slow growth and long lifespan, their upright shape vaguely reminiscent of a human silhouette or their symbolic weight (which might itself result from the biological properties previously mentioned) are among the possible factors explaining the strong affective bond with trees, despite the obvious difficulties of being in empathy with a plant. Interestingly, the oak and the white shark have in common that they are large-sized organisms, a trait that has been shown to positively influence our taxonomic preferences within vertebrates 8,9 . Overall, these results led us to consider that compassionate responses, although strongly structured by intuitive empathic perceptions, nevertheless tend to be modulated by the personal ethical inclination toward non-human organisms and by the knowledge we have acquired about each species. Therefore, the compassion score as developed in our study is likely not a strict measure of the intensity of our spontaneous compassionate impulse.
Whereas the empathic questionnaire is morally and affectively neutral (impression to better understand the emotions of one of the species presented in each pair), the compassion questionnaire was designed to involve emotionally the raters as much as possible. It is dilemmatic and virtually engaging their responsibility, since choosing to save one individual of the pair indirectly implies the sacrifice of the remaining one. At the end of the test, several raters have even spontaneously informed us about the discomfort perceived during certain choices they had to make. For these reasons, it would likely be more accurate to consider the compassion score as a complex expression of spontaneous emotional responses (the death of which of these two individuals would affect me the most?) mitigated by ethical considerations (which one deserves the most to survive?). Nevertheless, despite the probable intervention of reason in this rebalancing, it is remarkable to note that compassion scores remain closely linked to our spontaneous empathic perceptions and our phylogenetic proximity with a given organism.
Sympathy beyond the confines of man. Phylogenetic distance separating us from a given organism is a key parameter to explain our predisposition to connect emotionally with the different life forms. This finding supports the hypothesis of a significant biological component at the origin of our taxonomic preferences, although additional studies involving non-occidental raters (ex. hunter-gatherer or pastoral societies) would be necessary to ensure this trend can be generalized to the whole of humankind. The fluctuations of our affective preferences likely correspond to the amount of traits shared with humans, gradually acquired over the long-term evolution of the lineage leading to us, and that are involved in the intraspecific recognition of our fellow human beings. To some extent, such anthropomorphic stimuli induced by other organisms could therefore mobilize a cognitive circuitry that is usually at work in human relationships. The emotional reactions and prosocial behaviors they may promote would therefore be all the stronger the closer the species is to us, as it shares more of these traits with us.
This phenomenon evokes similarities with the interspecific behavioral diversions episodically reported in other vertebrates providing parental care. Incidental cases of interspecific adoption -most often between relatively closely related species -are well documented in mammals and birds 24,25 . Some birds, such as the cuckoos (Cuculus sp.), have even turned these behavioral flaws to their advantage, through a successful brood parasitic way of life, by forcing the adoption of their offspring by parents from another species 26 . However, it may be reductive to consider the derivation of human prosocial traits from the sole perspective of a selective disadvantage. Our interactions with other organisms are highly diversified and little is known about the real impact our emotions toward them may have had on human evolution. Our empathic skills may, for instance, have offered early hominids the advantage of better anticipating the reactions of wild mammals, either to facilitate their hunt or to assess instantaneously and individually their mood and the danger they may represent. Likewise, our compassionate impulses may have pushed our ancestors to rescue injured or hungry animals, or to adopt young orphaned animals. To what extent such altruistic interactions between humans and animals may have preceded and contributed to the emergence and the long-run development of the multiple domestication episodes remains unknown. What do we know, for instance, about the cognitive predispositions and the motivations that may have allowed humans to make the dog -proverbially presented as our best friend -the very first of the domesticated species?
Methods
Ethics statement. All experiments were performed in accordance with relevant guidelines and regulations.
The French National Commission on Informatics and Liberty approved protocols for this study (CNIL number . All participants were informed of the subject of the study (perception of biological alterity) and the protocol for processing personal data on the first page of the website, and access to experiments was conditional on an explicit informed consent to participate. Data were collected and analyzed anonymously. (1) To optimize the representativeness in terms of phylogenetic diversity, which translates here into the representativeness in terms of temporal divergence from humans, given the hypothesis to be tested: In that respect, most of the clades connecting at different level of the tree of life and that are placed as sister clades of the lineage leading to humans are represented 27,28 . Nevertheless, microscopic organisms have been excluded despite the fact they make up a considerable part of the biodiversity, because we considered them to be beyond our common sensory reach. In total, and excluding H. sapiens, our sampling represent 24 clades that diverged from the lineage leading to man at different times, from our sister clade (chimpanzee, 6.7 Mya) to the very distantly related plants clade (1496 Mya) 27 (Fig. 2., SI Appendix, Table S1). (2) To optimize the representativeness of the phenotypic and phylogenetic diversity among each of these clades: Most of them are represented by several species that have been selected to be highly divergent from each other (i.e. intra-clade divergence time values are always ≥45 million years) 27 . Species selected for a given clade can therefore be considered as different taxonomic samples (i.e. replicates) in order to measure the variability of our empathic reactions for a given divergence time value. Eleven poorly diversified clades (most often very closely related to humans) are represented by a single species (ex. Panina, Gorillini, Ponginae) whereas up to eight highly divergent species have been selected in order to take into account polymorphism of hyperdiversified lineages such as protostomians. For this particular clade, we have for instance selected three very divergent mollusks (a snail, a cuttlefish and a scallop), one annelid (an earthworm) and four very different arthropods (a beetle, a shrimp, a spider and a tick). Given that domestic species have been transformed by human selection, they have been excluded from the sampling because it is likely that their evolution have been directionally driven by our empathic or aesthetical preferences. As far as possible, species overrepresented in the media and entertainment (e. g. bottlenose dolphin) have been avoided or replaced by closely related species that are less popular (e.g. beluga whale) (SI Appendix, Table S1). (3) To take into account the variability among each species: For each species, four distinct photographs of distinct living individuals have been selected from online open sources in order to represent phenotypic variation of living individuals and minimize the enhancement bias specific to each shot (N total = 208 photos). Only photographs representing adult individuals were selected, as it has been shown that in mammals juvenile traits can positively influence our empathic perceptions 11,12 . For humans, two women and two men representing four distinct ethnic phenotypes have been selected.
Procedure. An online application was generated to present random pairs of photographs of different species. No information on the photographed individuals was given to the participants, who could therefore only base their choice on images. For each pair, the rater was instructed to click on the photograph corresponding to the answer to one of the two specific questions randomly chosen for each rater (Fig. 1). The position of the photograph on the screen (left or right) was randomly ascribed for each pair and for each rater. Each rater had 22 distinct pairs of photographs to assess, randomly drawn from the set of 52 species, with the constraint that for each pair, the two species were drawn from distinct clades, and that no species is seen more than once. Three pairs, randomly chosen from among those previously viewed (excluding the last four pairs already seen), were presented again at the end to estimate judgment reliability.
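The pair-drawing constraints described above (22 pairs per rater, species never repeated, the two members of a pair always from distinct clades, and left/right position randomized) can be summarized in a short routine. The sketch below uses a placeholder species-to-clade mapping and is only meant to make the sampling constraints explicit; it is not the code used in the original application.

```python
# Sketch of the per-rater pair drawing: 22 pairs from the 52-species pool,
# no species shown twice, pair members from different clades, random
# left/right position. The species-to-clade mapping is a placeholder.
import random

species_clade = {f"species_{i}": f"clade_{i % 24}" for i in range(52)}  # placeholder

def draw_pairs(n_pairs=22):
    pool = list(species_clade)
    random.shuffle(pool)
    pairs = []
    while len(pairs) < n_pairs and len(pool) >= 2:
        first = pool.pop()
        partner = next((s for s in pool
                        if species_clade[s] != species_clade[first]), None)
        if partner is None:          # no valid partner left for this species
            continue
        pool.remove(partner)
        pair = [first, partner]
        random.shuffle(pair)         # randomize left/right position
        pairs.append(tuple(pair))
    return pairs

for left, right in draw_pairs():
    print(left, "vs", right)
```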
Raters.
A total of 3509 raters participated, between November and December 2018. For each rater, the following general information was collected: sex, year and month of birth, and nationality. In addition, each rater provided information on their knowledge on biodiversity (poor, average, good, advanced), type of diet (omnivorous, vegetarian (fish allowed), strictly vegetarian, vegan), type of pet owned (yes or no for 5 categories: mammal, fish, reptile, bird, arthropod), opinion on hunting and fishing (practicing or supporting, against, indifferent), and opinion on the value of animal life (none, low, some but lower than human's, equal to human's, higher than human's); see SI Appendix, Methods for further details and the French original version. The following conservative selection on raters was applied. First, to reduce cultural heterogeneity, only raters with a European nationality were considered. Second, unreliable raters (i.e., with more than one incorrect answer during the test of judgment reliability), non-adult raters (younger than 18 years old), or raters with incomplete data were removed. Finally, evaluations of pairs of photographs taking less than 200 ms or more than 7 min were discarded.
Statistics. The aim was to examine the influence of the phylogenetic divergence time relatively to the human species on answers to empathy-driven or compassion-driven questions. Distinct analyses were performed for each question. Logistic regressions were used to analyze raters' decisions. The binary response variable corresponded to being chosen or not for the focal species (arbitrarily the species presented at the left position) during the presentation of each pair. Species and raters were considered random samples from a larger population of interest and were thus random-effect variables. Therefore, generalized linear mixed models with a binomial error structure were used. For each choice made by a rater, the difference between the phylogenetic divergence time with humans of the focal and the non-focal species was calculated, as provided by timetree.org 27 . The value of this difference was integrated into the model as the main variable of interest (Test). To control for potential confounding effects, variables concerning the raters' characteristics were also included in the model (after pooling some categories poorly represented) as interaction terms with the variable of interest. These confounding variables were the rater's sex (qualitative: male, female), age (quantitative, centered), knowledge on biodiversity (qualitative: minimum, average, fair, good), type of diet (qualitative: omnivorous, pesco-vegetarian, vegetarian), number of types of pet owned (quantitative, centered), opinion on hunting and fishing (qualitative: supporting, indifferent, against), and opinion on the value of animal life relatively to humans (qualitative: lower, equal, higher). The significance of each independent variable was calculated by removing it from the full model and comparing the resulting variation in deviance using a Chi square test. For each question, a score was computed for each species as the number of times the species was chosen divided by the number of times it was presented to raters. These computations were done using the lme4 package 29 on R 3.5.1 software (R Core Team 2018). The inflexion point (IP) of each Time-Score curve was defined as the time when the slope of the fitted line changed.
It was estimated by fitting a broken line defined by four parameters: the IP, the slopes of the line before and after the IP, and the intercept at t = 0. These parameters were estimated by minimizing the sum of squared residuals, using a genetic optimization algorithm as implemented in the R package rgenoud, version 5.8-3.0 30. The confidence interval for the IP was calculated by bootstrap, using at least 5000 resamples with replacement.
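To make the curve-fitting step concrete, here is a minimal Python sketch of the broken-line fit and the bootstrap interval for the IP. It is not the authors' code (the analysis was done in R with lme4 and rgenoud); the array names `t` and `scores`, the parameter bounds, and the use of scipy's differential evolution as a stand-in for the genetic optimizer are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

def broken_line(t, ip, slope_before, slope_after, intercept):
    """Piecewise-linear curve with a single inflexion point (IP)."""
    return np.where(
        t <= ip,
        intercept + slope_before * t,
        intercept + slope_before * ip + slope_after * (t - ip),
    )

def fit_broken_line(t, scores, bounds):
    """Estimate (IP, slope_before, slope_after, intercept) by least squares."""
    def sse(params):
        return np.sum((scores - broken_line(t, *params)) ** 2)
    # Stochastic global optimizer, used here in place of the rgenoud genetic algorithm.
    return differential_evolution(sse, bounds, seed=0).x

def bootstrap_ip_ci(t, scores, bounds, n_boot=5000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the inflexion point."""
    rng = np.random.default_rng(seed)
    n = len(t)
    ips = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample species-level points with replacement
        ips[b] = fit_broken_line(t[idx], scores[idx], bounds)[0]
    return np.quantile(ips, [alpha / 2, 1 - alpha / 2])

# Illustrative inputs: t = divergence times from humans for the 52 species,
# scores = fraction of presentations in which each species was chosen.
# bounds = [(0, 800), (-0.01, 0.01), (-0.01, 0.01), (0, 1)]
```

Refitting the global optimizer inside 5000 bootstrap iterations is slow; it is written this way only to mirror the description above.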
"year": 2019,
"sha1": "8af23536022517c27ced5f763b5e9d8a55658157",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-56006-9.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8af23536022517c27ced5f763b5e9d8a55658157",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
231861871 | pes2o/s2orc | v3-fos-license | Interactive Network Visualization of Opioid Crisis Related Data- Policy, Pharmaceutical, Training, and More
Responding to the U.S. opioid crisis requires a holistic approach supported by evidence from linking and analyzing multiple data sources. This paper discusses how 20 available resources can be combined to answer pressing public health questions related to the crisis. It presents a network view based on U.S. geographical units and other standard concepts, crosswalked to communicate the coverage and interlinkage of these resources. These opioid-related datasets can be grouped by four themes: (1) drug prescriptions, (2) opioid related harms, (3) opioid treatment workforce, jobs, and training, and (4) drug policy. An interactive network visualization was created and is freely available online; it lets users explore key metadata, relevant scholarly works, and data interlinkages in support of informed decision making through data analysis.
Introduction
The U.S. opioid epidemic is a major national concern, with the number of fatal drug overdoses accelerating during the COVID-19 pandemic. As of May 2020, the 12-month count of reported deaths from drug overdose had increased by an estimated 17% over the preceding year, rising from 67,281 to 79,251 deaths [1]. Furthermore, according to a recent study of spatial and temporal overdose spikes by the Overdose Detection Mapping Application Program, the number of reported overdoses increased by 18% between the pre-stay-at-home-order period (January 1 through March 18, 2020) and the post-stay-at-home-order period (March 19 through May 19, 2020), while the number of counties reporting fatalities also increased [2].
To address the current opioid crisis, the Department of Health and Human Services (HHS) strategic priorities include improvements in (1) pain management, (2) prevention, treatment and recovery, (3) data and research related to the opioid crisis, and (4) overdose-reversing drugs [3]. A holistic understanding of multiple datasets of drug policy, pharmacy claims, treatment workforce, and opioid-related harms can advance research related to the opioid crisis. We focus our discussion on data resources that are available without major hurdles to access. 1 These data often include aggregate-level identifiers, such as geographical units (state, county), drug names, and occupation codes. These aggregate-level identifiers can serve as linkages between datasets, and these linkages may allow researchers and stakeholders to identify new areas for public or health interventions and to provide evidence-based guidelines for practitioners and patients. A systematic view of datasets suggests that data linkages become an "informational asset" transforming the way we observe and analyze data [4].
Stakeholders and decision makers, however, are often challenged by the large number, complexity, and peculiarities of the existing datasets. Researchers may also not be aware of available resources, as these datasets are provided by different sources and have varying data quality and coverage. Some datasets are freely available, while others require signing legal documents or paying fees for additional aspects of the data. Furthermore, some datasets are massive in size and require database expertise to run queries; other datasets exist only as textual data in PDF format and require file parsing before use. This review seeks to identify commonly used, freely available datasets and describe their coverage and linkages at the aggregate level. This paper contributes to the growing body of work on linking data sources [4,5,6,7] by introducing a novel visualization of linked data that communicates their temporal, geospatial, and topical coverage and highlights the interlinkages between them. Researchers and practitioners can use this visualization to identify datasets relevant for research, teaching, and policy decisions.

1 We acknowledge that there are many valuable data resources; however, they are harder to incorporate into research due to privacy protection or cost concerns.
Background
The causes, consequences, and manifestations of the U.S. opioid crisis have been studied from many different angles, including prevention, treatment, drug prescription, law enforcement, criminal justice, and overdose reversal. Treatment expansions and prescription reductions are two essential steps in reducing mortality and improving safety for patients with chronic pain. Monitoring and regulatory policies play an equally important role in balancing the harms, cost, availability, and benefits of opioid use, as seen in policies such as prescription drug monitoring programs (PDMPs), health insurance expansions, and comprehensive federal legislation (e.g., the Comprehensive Addiction and Recovery Act) [8,9]. These efforts have led to a decrease in the overall U.S. opioid prescription rate, which has fallen from 81.3 per 100 people in 2012 to 46.7 in 2019 [10]. But while the U.S. has had success in implementing preventative measures, it has struggled with improving treatment access for those suffering from addiction. A major gap remains between service demand and supply.

Recently, several studies were published that review the current literature and secondary data relevant to the opioid addiction crisis [12,6,13]. Maclean et al. [13] collected and reviewed economic studies and identified several topics relevant for understanding the opioid crisis: (1) pharmaceutical industries and drug prescriptions, (2) healthcare providers and the labor market, (3) harms and crime, and (4) policies. Another study [12] extracted intervention variables (e.g., prevention, treatment, harm reductions) and enabling variables (e.g., surveillance, stigma). Furthermore, Smart et al. [5,6] reviewed existing datasets, grouping them according to the HHS strategic priorities: (1) better pain management, (2) addiction prevention, treatment and recovery services, and (3) better targeting of overdose-reversing drugs. In addition, the authors classified data based on type, namely national surveys, electronic health records (EHR), claims data, mortality records, prescription monitoring data, contextual and policy data, and others (national, state, local). Strengths and weaknesses of each dataset were assessed using various metrics (e.g., data accessibility, data linkage, coverage).
Data descriptions are often presented in a tabular format with new attributes rendered as columns. For instance, in [12], each variable is provided with its relative frequency of occurrence in the reviewed literature, whereas in [6], a plus/minus sign is used to indicate strengths and weaknesses for each dataset.
A different perspective, called "probabilistic linkage," was developed by Weber et al. [4] in 2014 focusing on a visual representation of potential biomedical sources and the values of their linkages. The team used a tabular form with sizes, shapes, colors, and positions to indicate data quality, data linkage, types of data (e.g., pharma, claims, EHR, non-clinical data), data coverage, and even the probabilities for obtaining new data or linking existing data.
Over the last several years, many new datasets became available (e.g., data.gov and nlm.nih.gov), and researchers now have access to datasets with diverse quality and coverage. In order to federate and use these resources, detailed knowledge about the datasets is required. Understanding data linkages [14] becomes critical for understanding, communicating, and reducing disease [15]. Data visualizations can be used to communicate the complexity of heterogeneous data.
For example, SPOKE [16] and Springer Nature SciGraph [17] use a knowledge graph (KG) to interlink and query different datasets. The SPOKE KG interlinks more than 30 publicly available biomedical databases, whereas SciGraph interlinks funders, projects, publications, citations, and scholarly metadata in support of data exploration.

Data Collection

Building on the review by Maclean et al. [13], we applied a modified protocol of scoping reviews [18] to identify open datasets used in the 120 studies cited there (see Figure 1). Specifically, the 120 articles, ranging from 1986 to 2020, were imported into a Mendeley library group, and duplicate records were removed. Each article was scanned for datasets mentioned in its methodology section, and articles without datasets were discarded. The remaining set (107 articles) was tagged in Mendeley with the dataset names as they were used in the studies. We identified 283 unique name tags. Across the 107 studies, there were many inconsistencies in naming and spelling; for instance, 'nvss,' 'nvss multiple cause of death,' and 'nvss multiple cause-of-death mortality' all referred to U.S. mortality data from death certificates produced by the National Center for Health Statistics. We normalized labels using OpenRefine and the Nearest Neighbor algorithm with the Prediction by Partial Matching (PPM) distance [19]. The algorithm detected 61 clusters that were merged, resulting in 230 normalized labels. We manually inspected all labels and removed datasets that did not fit our eligibility criteria: (1) the dataset must be publicly available, and (2) the dataset should fall into one of the following categories: i) pharmaceutical data, related to opioid prescriptions; ii) policy data, related to state drug laws; iii) opioid overdose data, related to treatment and treatment results; and iv) employment data, related to training and hiring in the substance use disorder treatment (SUDT) industry. As a result, we identified 20 datasets for synthesis and data linkage exploration (see Table 1).
Network Visualization
Network visualizations are widely used to capture the relationships between entities (e.g., co-authorship networks or gene-disease networks). They display entities (nodes) and their relationships (edges) in layouts that showcase overall connectivity structure and clusters while avoiding edge crossings. Networks can be extracted from tabular data; for example, a co-author network can be extracted from a tabulation of papers and the set of authors per paper: co-author links connect all authors that appear on a paper together, creating an undirected weighted network [20]. In addition, each node and edge can be color- or size-coded to visualize additional attributes (e.g., number of papers, number of citations, year of first publication, publication, topic).
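As a concrete illustration of this extraction step, the following Python sketch builds an undirected, weighted co-author network from a small, made-up table of papers; it illustrates the general idea rather than any code used in the paper.

```python
from itertools import combinations
from collections import Counter

# Made-up input: one record per paper with its list of authors.
papers = [
    {"title": "Paper A", "authors": ["Alice", "Bob", "Carol"]},
    {"title": "Paper B", "authors": ["Alice", "Carol"]},
    {"title": "Paper C", "authors": ["Bob", "Dave"]},
]

# Co-author links connect all authors appearing on the same paper;
# the edge weight counts how many papers each pair of authors shares.
edge_weights = Counter()
for paper in papers:
    for a, b in combinations(sorted(set(paper["authors"])), 2):
        edge_weights[(a, b)] += 1

for (a, b), weight in sorted(edge_weights.items()):
    print(f"{a} -- {b} (weight {weight})")
```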
To compute a visualization of the 20 datasets, we first converted the CSV file with all 20 datasets (rows) and 16 attributes (columns) into two separate files, a nodelist and an edgelist. The nodelist has an additional numeric identifier for each dataset that is used in the edgelist to describe how datasets are linked.
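A conversion of this kind might look like the following Python sketch. The file name, the 'name' and 'linked_datasets' columns, and the semicolon separator are assumptions made for illustration; the paper's actual 16-attribute schema is not reproduced here.

```python
import pandas as pd

# Assumed input: datasets.csv with a "name" column and a "linked_datasets"
# column listing, separated by semicolons, the datasets it can be joined with.
df = pd.read_csv("datasets.csv")

# Nodelist: a numeric Id per dataset plus its descriptive attributes.
nodes = df.reset_index().rename(columns={"index": "Id", "name": "Label"})
nodes.to_csv("nodelist.csv", index=False)

# Edgelist: one (Source, Target) row per linkage, expressed with the numeric Ids.
name_to_id = dict(zip(nodes["Label"], nodes["Id"]))
edges = []
for _, row in nodes.iterrows():
    for target in str(row["linked_datasets"]).split(";"):
        target = target.strip()
        if target in name_to_id and name_to_id[target] != row["Id"]:
            edges.append({"Source": row["Id"], "Target": name_to_id[target]})
pd.DataFrame(edges).to_csv("edgelist.csv", index=False)
```

The Id/Label and Source/Target column names match what Gephi's spreadsheet importer expects, which is why they are used here.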
For instance, the ID for the CDC Mortality dataset is '0', and the TEDS dataset has its own numeric ID. We used the Force Atlas 2 algorithm in Gephi [21] to lay out the network in a 2-dimensional space in a manner that minimizes edge crossings and stress, i.e., interlinked nodes are placed in close proximity (see Figure 2). Datasets are color-coded to visually render the 4 themes: prescription, harms, jobs, and policy. The workflow for creating this network in Gephi is available on GitHub (https://github.com/obscrivn/datasets). The interactive visualization is created using the JavaScript GEXF viewer package [22]. The network is exported from Gephi into GEXF format (.gexf), an XML-based format suitable for JavaScript (JS) interactive visualization frameworks. The gexf.js viewer code is then updated and uploaded to GitHub. The interactive solution is available at https://obscrivn.github.io/datasets/ and supports search, filtering, and details on demand [23], as illustrated in Figure 4.
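For a fully scripted alternative to exporting from the Gephi interface, the same nodelist and edgelist could also be written to a GEXF file directly. The sketch below uses networkx and the hypothetical files from the previous sketch; it is an illustration, not part of the paper's published workflow.

```python
import pandas as pd
import networkx as nx

# Hypothetical inputs produced by the conversion step above.
nodes = pd.read_csv("nodelist.csv")
edges = pd.read_csv("edgelist.csv")

G = nx.Graph()
for _, row in nodes.iterrows():
    # The label (and any other node attributes) carries over into the GEXF file.
    G.add_node(int(row["Id"]), label=str(row["Label"]))
for _, row in edges.iterrows():
    G.add_edge(int(row["Source"]), int(row["Target"]))

# Write a .gexf file that a JavaScript GEXF viewer can load directly.
nx.write_gexf(G, "network.gexf")
```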
Results and Discussion
The visualization makes it possible to interactively explore key metadata and data interlinkages. Using the online site, users can explore and navigate each dataset by clicking on its node, examining the linked datasets, reviewing the data dictionary, and getting familiar with recent publications that use the selected dataset. The collection of relevant scholarly articles helps researchers become familiar with case studies that use these datasets. Figure 4 zooms into the ARCOS dataset. The dark blue color in the legend indicates that this dataset belongs to the pharmaceutical category. Clicking on the ARCOS node opens the attribute menu on the left. The dictionary and dataset attributes provide direct links for reviewing the data dictionary and downloading the data.
From the data description, a potential user learns that this dataset provides information on drug sales and distribution. To provide relevant information about data usage, the visualization shows three recent scholarly publications using the ARCOS dataset. For instance, the study by [24] asks whether areas with high prescription rates also show evidence of large payments to medical practitioners and of negative changes in households.
Conclusion
One key priority laid out by HHS for combating the opioid crisis is better access to data and the encouragement of data-driven (policy) decision making. To assist researchers and policymakers in navigating the existing datasets, we developed a dataset and visualization that make it possible to explore important characteristics and interlinkages of 20 widely used, publicly available datasets.
Going forward, we plan to apply the same methodology to individual-level linked data and non-public resources. A current limitation of the presented work is that the datasets are not updated as new data become available. In future research, we will perform regular updates of the datasets and their interlinkages.
Another important area for future work is conducting user studies to identify how to best improve the visualization for different stakeholder groups and what additional datasets should be added. | 2021-02-11T02:15:43.077Z | 2021-02-10T00:00:00.000 | {
"year": 2021,
"sha1": "913cfaeaa89df7e0aa66cf6d9d972fecd183627c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "913cfaeaa89df7e0aa66cf6d9d972fecd183627c",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Economics"
]
} |
251824215 | pes2o/s2orc | v3-fos-license | Continuity through change: populism and foreign policy in Turkey
Abstract Through a discourse-theoretic approach, this paper problematises the under-theorised chameleonic quality of populism. While populist politics is often expressed as the construction of the people against the elite, this paper argues that the political should rather be sought in how populism revives itself despite (and through) constant discursive shifts. It examines the interrelations between populism, identity and foreign policy, inserting 'dislocation', the transitory moment of disruption in the discursive field, as the main enterprise of populist politics. Empirically, the paper scrutinises how Turkish President Erdoğan switched from conservative democratic to Islamist to nationalist discourses, each with repercussions in the field of foreign policy, and sustained the populist moment through successive dislocations. In particular, it focuses on how the 'Ottoman' myth spelled different populisms and foreign policy discourses in different periods of the Justice and Development Party (Adalet ve Kalkınma Partisi – AKP) rule.
In August 2013, Egyptian security forces killed hundreds of Muslim Brotherhood supporters who were camping out in Rabaa al-Adawiyya Square, Cairo, to protest the military coup against President Mohammad Morsi. Rabaa soon became a rallying call for Islamist movements across the Middle East. A group of Turkish activists inspired by this cry created the Rabaa sign, a hand with four fingers extended, referring to its literal meaning 'the fourth' (Figure 1). Turkish President Tayyip Erdoğan, the fiercest international critic of the Egyptian crackdown, popularised this hand gesture to show solidarity with Morsi supporters. Rabaa alluded to his undeclared affiliation with the Muslim Brotherhood, as well as his Islamist expansion efforts to back them in various post-'Arab Spring' countries and establish a new, Turkish-led Sunni order in the Middle East.
In an ironic twist, soon after mid-2015, Turkey's neo-Ottoman dreams in the region and hopes for a Kurdish resolution at home faded, and the Rabaa sign acquired a completely different meaning. It was adjusted to suit the new political climate and dislocated from an Islamist discourse to a nationalist one with overtly ethnic appeals and no direct emphasis on religious values. Now, the four fingers symbolised the key principles of Erdoğan's Justice and Development Party (Adalet ve Kalkınma Partisi - AKP): 'one nation, one flag, one homeland, one state'. This motto was adopted as a new article in the party bylaws (AK Parti 2019, 25), and Erdoğan, together with his audience, recited it at the end of each rally to garner support for Turkey's successive military operations against Kurdish political formations within and beyond its borders.
The limits to foreign policy changes are very much imposed by regional and global power structures, as well as the thickness of the ideological elements. Nevertheless, populism, known for its 'essential chameleonic quality' , always assimilates 'the hue of the environment in which it occurs' (Taggart 2000, 4). Populists may conveniently take on the colour of their surroundings based on changing power calculations, as reflected in the divergent uses of the same Rabaa sign in the hands of Turkey's populist leader. This chameleon-like characteristic of populist actors in both domestic and foreign policy has already been empirically shown in the literature (Mikucka-Wójtowicz 2019; Muis 2015), though it still awaits extensive theorisation. This study problematises the oft taken-for-granted elusiveness and 'empty shell' composition of populism, and essentially argues that populism indeed sustains and re-generates itself through this volatile change.
Despite the extensive conceptual debate over populism, definitions agree on two of its components: anti-elitism (the antagonistic framework dividing a society vertically between the people and the elite) and people-centrism (politics as the expression of the general will and demand to restore popular sovereignty) (Aslanidis 2016). Due to its elusiveness, populism can only be identified in its outcomes. This article particularly examines its implications in foreign policy. In the scholarship on populism, the 'people vs elite' binary has largely been studied within a national boundary. Nevertheless, identities are constantly negotiated, challenged, and restated through discursive practices, including articulations of foreign policy. The purpose here, then, is not to identify and explain state behaviours, but to demonstrate the populist logic of this 'co-constitutive' relationship between representations of 'the people' and foreign policy (Hansen 2006, xii). Moreover, it is also a fruitful exercise to study chameleonic change in a field where such dramatic shifts are less expected since foreign policy is relatively insulated from other governmental agencies and largely rests on established principles and diplomatic traditions. 1 Studies exploring the existence of a populist foreign policy vary widely, from an outright rejection of its reification (Balfour et al. 2016) to several categorisations of its sub-types (Chryssogelos 2021; Verbeek and Zaslove 2018). 2 In particular, different conceptual genera lead to different research questions in the study of foreign policy. The ideational approach, for instance, defines populism as a thin ideology and primarily seeks to answer the 'what' question, such as the populist elements in foreign policy formulation and their measurement (Liang 2007;Mudde and Kaltwasser 2017;Verbeek and Zaslove 2018). The strategic approach considers populism a political strategy and tends to deal with the 'why' question, focusing on populist leadership and mobilisation in foreign policy decision-making (Weyland 2001). Lastly, the discourse-theoretic approach defines populism as a political logic, denying any substance to the phenomenon, and shifts the focus from the contents of populism to how populism articulates those contents (De Cleen and Stavrakakis 2017;Laclau 2005a). Consequently, it pursues the 'how' question behind foreign policy discourses.
This article, too, adopts a discourse-theoretic approach to understand the interrelations between populism, identity and foreign policy. Nevertheless, it first challenges the static conceptions of 'populist foreign policy' and discusses the neglected foreign policy change in populist settings. Populist variations should not only be sought across space in the multiple sub-types of foreign policy, but also across time in its re-orientations within a single case. Second, this study uses the foreign policy debate as an entry point to understanding populist politics in broader terms. References to the chameleonic quality of populism are not scant in the academic literature, but it is instead seen in the diversity of forms and ideologies populisms can adopt across the world (e.g. Aslanidis 2016, 89;Bonikowski et al. 2019, 60). This article explicitly focuses on how populism of the same populist leader, political party, or movement may shape-shift over time, depending on the context. While populism is often understood as the construction of the people against the elite, this article argues that the political should rather be sought in how populism revives itself despite (and through) constant shifts and disruptions in the discursive field. For this task, it, third, seeks to unearth the constitutive role of 'dislocation' in Laclauan terms along the identity-foreign policy nexus that is central to the making and unmaking of the populist moment (Laclau 1990). It revisits the concept and calls for further de-essentialisation and dynamisation to better grasp the volatility of populism.
Turkish populism aptly illustrates the constant foreign policy shifts under populist rule. Erdoğan has attracted much scholarly attention as one of the forerunners of the current populist wave, and the AKP rule since 2002 offers a broad timespan for researchers to investigate the populist impact in foreign policy as a site of constant power struggles and dramatic discursive shifts (Alpan 2016;Balta 2018;yalvaç and Joseph 2019). Although the scholarship refers to the AKP (and Erdoğan) as a singular, coherent actor, there have actually been multiple AKPs since its inception in 2001. The party is in constant flux. Depending on volatile political contexts and power calculations, the party self-admittedly wriggles out of its old skin, as evidenced by major policy shifts and constant changes even to the list of its founding members (Hacısalihoğlu 2017). Pointing to this ever-changing state of the party's identity, the article empirically clarifies how Turkish populism, as performed by the AKP, has articulated different foreign policy discourses -be they liberal international, Islamist or Turkish nationalist -while maintaining continuity under the populist umbrella.
To this end, the article opens up the basic tenets of the Laclauian approach to populism, discusses the central role of dislocation in retaining the populist moment, and offers a revisionist take on the concept. With that theoretical framework, the article moves to illustrate the AKP's shifting discourses from conservatism to Islamism to nationalism, as reflected in identity and foreign policy. In particular, it focuses on how the 'Ottoman' myth spelled different populisms and foreign policy discourses in different periods of AKP rule. This paper focuses on the time frame from 2002 to 2021 and analyses official foreign policy texts, newspapers, and academic and popular writings.
The populist configuration of politics and the constitutive role of dislocation
The elusiveness and vagueness of populism have led many academics to challenge the analytical value of the term and even avoid using it altogether (Herkman 2017, 471). However, it is exactly this elusiveness that brings populist politics into existence and enables it to accommodate multiple heterogeneous social demands that otherwise would not join. In Laclau's words, 'Populism's relative ideological simplicity and emptiness, […] which is in most cases the prelude to its elitist dismissal, should be approached in terms of what those processes of simplification and emptying attempt to perform ' (2005a, 14).
By its profoundly political focus, Laclau's discourse-theoretic approach differs from others treating populism either as a symptomatic effect of some objective causes (e.g. economic globalisation, socio-cultural changes) or as a thin ideology comprising particular ideas and beliefs (Mudde and Kaltwasser 2017;Rodrik 2018). A movement cannot be labelled populist on the basis of its ideology alone, 'but because it shows a particular logic of articulation of those contents -whatever those contents are ' (2005b, 33). Populism is, rather, a logic or reason: a distinctive set of discursive features organised around constructing a collective subject, the people, and articulated through an antagonistic political frontier that pits the people (the underdog, the common man, the little man) against the elite (the establishment, the regime, the imperialists etc.). This political frontier is drawn among a multitude of indeterminate ideological elements (the floating signifiers) that can be woven into any particular ideological formation (Laclau 2005a, 110). The analytical fulcrum of this theory is the role of the equivalential links built between dispersed social and political demands through nodal points, privileged points of signification, and empty signifiers, voicing them as a more universal opposition against the elite as a whole.
In the infinitude of the social, such chains of equivalence are neither random nor predetermined. The populist moment is improper and unstable 'because it tries to operate performatively within a social reality which is to a large extent heterogeneous and fluctuating' (Laclau 2005a, 118). Michael Hauser pushes this point further when raising his criticism against Laclau that the signifying chains today are even more 'discontinued, decentralized, and dispersive' , and rather appear to be a heterogeneous assemblage of semantic segments only tied with the signifier of the populist leader (2018,74). This contingency and structural undecidability, leading to a continuous process of (re-)signification, is especially important to grasp the dynamics of change in populism. Here, dislocation occurs when the structure cannot semanticise the new, and the possibility of signification reaches its limit as a structural failure in an encounter with the Real. Once the structure is dislocated, rival forces fight for recomposition and re-signification of the nodal points (Laclau 1990, 40-41). This constitutes the logic of displacement of political frontiers. New political frontiers can destabilise and shift power blocs and lead to the proliferation of floating signifiers; therefore, the partially fixed moments of discourse can be tethered to disarticulations and re-articulations by rival projects (Laclau 2005a, 153). Dislocation is then both traumatic/disruptive and productive, serving as the ground for new identities (Laclau 1990, 39). As such, by making new strategies and discourses available for signification, dislocation puts forward the political agency.
In a 1993 interview, Laclau criticises his earlier emphasis on antagonism in Hegemony and Socialist Strategy and argues for the analytical primacy of dislocation, because dislocation precedes both antagonism and articulation (quoted in Stavrakakis 2003, 324). However, this concept almost disappears in his 2005 work on populism. With an emphasis on antagonism, the Laclauian theory of populism does not differ from his general theory of the political constitution of group identities. Thus, several scholars have called for its revision (Aslanidis 2016;De Cleen and Stavrakakis 2017). This study too adopts a revisionist approach, arguing for dislocation to be the main enterprise of populist politics. Re-inserting dislocation will contribute to exploring the peculiar dynamics of populist politics. This, however, requires a re-adjustment of the concept. Mostly associated with the organic crisis in the Gramscian approach, dislocation appears as a response to the unavoidable failure of the structure when discourse encounters the Real but reaches the limits of its meaning. However, as Benjamin Moffitt argues, crises are not objective phenomena, but both 'mediated' and 'performed' by political actors (2015,190). The study of populism, recalling its chameleonic qualities and ever-shifting chains of equivalence, requires an even more dynamic and de-essentialised understanding of dislocation that problematises the conception of the Real. More importantly, dislocation may not necessarily result from a failed structure encountering the Real. Rather, it may hint at a populist governance technique to exploit new opportunities and sustain the hegemonic moment via transformation in a volatile context. In Laclau's words, 'the agents themselves transform their own identity insofar as they actualise certain structural potentialities and reject others ' (1990, 30), but this is not impinged on by a failed structure. Thus, dislocation is neither objective nor necessarily a failure of the established reality. It is, rather, intradiscursive and constitutive, allowing 'conversions of articulatory practices and accompanying shifts in public discourses, which can then be used as a platform for a hegemonic intervention' (Nabers 2019, 275).
The intradiscursive dislocations as a break in hegemonic discourse may lead to floating signifiers that can be captured by counter-discourses (Stavrakakis et al. 2018). Nonetheless, this does not mean anything is possible, since dislocation occurs over an existing structure; it entails the reactivation of previously sedimented nodal points (Wodrig 2018). With this operation, some existing demands are extended or taken up in another, and some formerly excluded demands may be incorporated into the camp. After dislocutory moments enable the creation of new antagonistic frontiers, new discursive elements such as empty signifiers, fantasies or myths are also to be stimulated in order to suture the dislocated structure and conceal dislocation. They function not only as the source of diverse collective social imaginaries but also as 'surfaces of inscription' , over which dislocations and various demands can be inscribed (Laclau 1990, 63-67).
Overall, one should be aware that populism is not a synonym for 'the political' as Laclau suggests (2005a), nor is dislocation exclusive to populism. Differently from various forms of doing politics, populism articulates the society in a vertical axis and brings up a particular set of affective appeals, such as animosity towards the elite and painting the people as the underdog (De Cleen and Stavrakakis 2017). As such, dislocation can be observed in nonpopulist politics, too, 3 but it appears inherent to populism. Compared to nationalism or other ideologies, populism is 'something political actors do, not something they are' (Bonikowski et al. 2019, 63). The emptiness and lack of programmatic scope allow for an intrinsic fluidity that could be filled with shifting contents. While this populist propensity to change may not always bring actual policy change, depending on the context and power positions, it looms as a political logic to ensure continuity.
Dislocations and foreign policy changes in AKP populism
Foreign policy change can be observed at multiple levels, varying from simple policy adjustments over a single issue to fundamental re-orientations (Haesebrouck and Joly 2021). Numerous scholars have addressed the question of how strongly the AKP era represents a rupture in the historical course of Turkish foreign policy. This is especially true of the so-called 'shift of axis' debate, problematising Turkey's increased engagement with the Middle East, along with Euroscepticism towards the end of the first decade of the 2000s (Alpan 2016; Kösebalaban 2011; Öktem, Kadıoğlu, and Karlı 2009; Onar 2016). The literature often marks a sharp distinction between the pro-European Union (EU) democratic 'good old days' in the early years and the creeping Islamist anti-Western authoritarianism in the second half of AKP rule; however, several other studies countered this dichotomy and aimed to integrate the AKP's strategies in continuity with the larger course of Turkish foreign policy (Hatipoglu and Palmer 2016; Hoffmann and Cemgil 2016; Özpek and Yaşar 2018). While the AKP's foreign policy choices may make sense within Turkey's broader grand strategies (Aydın 2020), this study observes multiple ruptures within the AKP's foreign policy discourse. In fact, it considers the discursive dislocations a constant and constitutive element of AKP populism.
The dislocations entail re-linking disparate demands to create new antagonistic frontiers and re-signifying the elements to reach or maintain the moment. Therefore, their analysis requires us to capture the ideological elements the AKP employed in the reconfiguration of those disparate demands. These are conservatism, Islamism and Turkish nationalism, which Tanıl Bora describes as 'three phases of the Turkish Right' (2007). In the Turkish practice, the boundaries between these three ideologies have been quite flawed, enabling diverse mutations. 'Islamism is our water of life. Nationalism is the ice, and conservatism is the steam. All three nourish the same soil. The name of those soils is Turkey' , a pro-government columnist states, referring to this interpenetrable nature in AKP politics (Küçük 2017). The co-occurrence of these three ideologies generates a discursive heterogeneity, as the dislocation is never total, either. Nevertheless, in each period, one ideology comes forward as the dominant framework informing the populist discourse and foreign policy. While the populist template of the disadvantaged, inherently virtuous people against the corrupt elite holds after each dislocation, the people and the elite are re-signified each time according to those changing ideological elements.
In the Turkish case, such dislocations are concealed and sutured by several empty signifiers, but mainly by the Ottoman myth, which also serves as a surface of inscription revealing the accompanying shifts. The Ottoman is the heartland of AKP populism. In AKP discourse, the glorification of the imperial era and self-flattering references to the good old times of 'şanlı ecdad' , the glorious ancestors, are more frequent than ever, in contrast to the mainstream Kemalist conception of the Ottoman as a corrupted episode of decline (Aydın-Düzgit, Rumelili, and Topal 2022). AKP populism has always been all about the Ottoman; however, what the Ottoman past signifies has been dislocated multiple times. In a 'restorative nostalgia' , this myth has selectively informed the dominant articulations of 'the people' in the AKP's pursuit to revive the past (Boym 2001). The following analysis shows how Turkish foreign policy shifts can be read through the lens of dislocation in AKP populism, as reflected in the constant re-signification of its nodal points and myths ( Figure 2).
AKP populism as conservative democracy
The 1997 military intervention shaped much of Turkish politics in the 2000s by drawing the antagonistic front line between the 'defensive nationalists' and 'conservative globalists' (Öniş 2007). Along with fears that the Eu accession process would grant more space to political Islam and Kurdish nationalism -the twin enemies of Kemalism -the intervention accelerated the militarisation and anti-Westernisation of secular groups. A loose but tangible form of Kemalist mobilisation called the ulusalcı front (Ulusalcı cephe) flourished, bringing diverse actors together, such as members of Turkey's centre-left Republican People's Party (Cumhuriyet Halk Partisi -CHP), the far-right Nationalist Action Party (Milliyetçi Hareket Partisi -MHP), and various secular civil society associations (Çınar and Taş 2017). Another consequence of this intervention was the moderation of political Islam, either ideologically or tactically, in the face of state repression. The AKP founded in 2001 embodied this transformation.
Constructing politics in binary categories such as 'state vs society' , 'White Turks vs Black Turks' , 'Istanbul vs Anatolia' or 'happy minority vs silent majority' , the AKP was built on an anti-elitist and anti-establishment discourse. 4 Its populist politics was about the construction of 'the people' via an equivalential chain weaving together heterogeneous demands through its opposition against an illegitimate elite, ie the Kemalist political and bureaucratic elite who ran the country for decades (Ongur 2018; yalvaç and Joseph 2019). Erdoğan popularised the term 'CHP's oppression' ('CHP zulmü'), a term characteristic of the Turkish centre-right, which marks the recent past by the perpetual victimhood of conservative Anatolian people under the oppressive secular bureaucratic elite (Milliyet 2014). He repeatedly expressed his scorn towards the political and cultural elite who 'drink their whisky on the Bosphorus […] and hold the rest of the people in contempt' (Bucak 2014). Likewise, he presented himself as the voice of the oppressed and gave a 'one of us' image when proclaiming: 'In this country, there is segregation of black Turks and white Turks. your brother Tayyip belongs to the black Turks' (Özkök 2004). This is how AKP populism managed to present its ascendance to power as a revolution from below, compared to the top-down modernisation project of Kemalism. AKP politics performed a passive revolution by incorporating wider conservative groups into the neoliberal system (Tuğal 2021). This notwithstanding, it was a populist hegemonic project that claimed the space for representation of all groups disadvantaged by the Kemalist system. To that end, the AKP invented the empty signifier 'Türkiyelilik' (literally 'being from Turkey') as an all-inclusive supra-identity under which Kurds, Islamists and liberals might all find space (Hürriyet 2003).
To cleanse itself of Islamist stripes and achieve a legitimate standing in the national and international arena, the founders of the AKP opted for 'conservative democracy' modelled on the Christian Democrats in Europe. The term was coined by Yalçın Akdoğan, then-Chief Advisor to Prime Minister Erdoğan, in a key text that attempted to situate the AKP within the Western tradition of conservatism (2003). While avoiding any single reference to Islamist figures such as Necip Fazıl Kısakürek, who once inspired many of the AKP cadres, Akdoğan tried to create a new genealogy and made constant references to Western thinkers such as Michael Oakeshott, Edmund Burke and Friedrich Hayek, occasionally linking them to Turkish conservative intellectuals such as Nurettin Topçu and Ali Fuat Başgil (2003). Until the 1990s, a rejectionist discourse determined to combat Westoxification in politics and society was dominant among Turkish Islamists. Now, conservative democracy was taking Europe and European values as the normative framework. Integration with the Western world and embracing the plurality of society as a richness were common motives, as Erdoğan stated: 'I want to see Turkey making a meaningful contribution to the mosaic of cultures that one observes in Europe. My motto is a local-oriented stance in a globalizing world' (2004). AKP populism articulated a cosmopolitan discourse targeting integration with Europe and the rest of the world in political, economic and cultural terms (Alpan 2016, 16).
In its initial years, the AKP pursued a people-centric politics of hope towards a more democratic and powerful country, while also realising a concrete agenda mostly driven by the International Monetary Fund (IMF) and Eu. Prior to the AKP's ascendance to power, the declaration of Turkey's candidacy status at the 1999 Helsinki Summit brought a wave of constitutional reforms on human rights and freedoms. Later, at the Copenhagen Summit of December 2002, the Eu's conditional decision to open accession negotiations led to widespread euphoria and an increasing belief that Turkey was destined to join the Eu. This incentive of full membership resulted in the parliament passing successive harmonisation packages to fulfil the Copenhagen political criteria. Europeanisation became the main paradigm, overhauling the civil and penal codes in line with the acquis and leaving its imprints on all facets of social and political life, from civil-military relations to corruption and employment (Aydın-Düzgit and Kaliber 2016, 3).
While maintaining Turkey's pro-Western orientation and European anchor in a liberal internationalist framework (Balta 2018), this was also the era when the AKP, with its like-minded foreign ministers Yaşar Yakış, Ali Babacan and Abdullah Gül, in succession, systematically endeavoured to de-securitise its foreign policy issues and employ its soft power assets, such as economic interdependence and promoting mediation roles in regional conflicts (Altunişik and Martin 2011, 571). Turkish foreign policy pursued 'active globalization', targeting the construction of regional and global networks of shared interests (Öktem, Kadıoğlu, and Karlı 2009, 21). The AKP became over time more vocal in Middle Eastern affairs too. Apart from symbolic moves such as the appointment of Turkish diplomat Ekmeleddin Ihsanoğlu as the Secretary General of the Organization of the Islamic Conference in 2005, Turkey assumed mediating roles, such as during the 2008 Golan Heights conflicts and the 2009 free trade agreement between Israel and Syria (Kösebalaban 2011, 176; Özerim 2018, 173). Nevertheless, EU relations remained the top priority.
Compared to the subsequent periods, references to the Ottoman past are less frequent or explicit in the AKP's early years under the military's dominance over politics. Akdoğan's book, for instance, does not even contain the word 'Ottoman' and instead underlines that the AKP's political stance originates from a globally established practice, not from the past or a civilisational background (2003,6). This precautious take results from the much-contested status of history, particularly the Ottoman episode, in Turkish politics. For instance, when then-Istanbul mayor Erdoğan was removed from office in 1998, the articles on the website of the municipality, referring to Istanbul as the capital of the Ottoman-Islamic civilisation, were also removed out of fear of the military's wrath under the shadow of the 1997 intervention (Çınar 2005, 189). Nevertheless, after the AKP came to power in 2002, its political elite occasionally used the Ottoman myth to cherish the liberal ideal of pluralism. Through references to the Ottoman millet system to manage religious and ethnic diversity, the myth signified a harmonious, plural world. Establishing 1453, the Conquest of Istanbul, as the founding moment of Turkish history, Erdoğan frequently praised the Ottoman Istanbul for hosting multiple beliefs and cultures together (ANKA 2008;Çınar 2005). This articulation enabled the AKP elite not only to frame the Western liberal principles as a home-grown idea, but also to challenge the Kemalist modernisation from a 'legitimate'/Western vantage point for eradicating the plural texture of the society. The Ottoman practice, arguably reconciling Islam and multiculturalism, was invoked as an antidote to the Huntingtonian 'clash of civilisation' thesis. Reviving the Ottoman tolerance (Osmanlı hoşgörüsü), Turkey could be a beacon of co-existence in an otherwise fragmented region (Albayrak 2007). In the post-9/11 world, Turkey, a Muslim-majority country, becoming an Eu member or co-sponsoring the united Nations 'Alliance of Civilizations' would further reinforce this cosmopolitan discourse. Nevertheless, as Menderes Çınar notes, this global emphasis on the AKP's or Turkey's Muslim identity also maintained the civilisational outlook, located Turkey as a representative of a different (Islamic) civilisation, and increased stress on Islam in the definition of Turkish national identity both at home and abroad (Çınar 2018, 183-184).
AKP populism as Islamism
Pinpointing when the AKP drifted away from its reformist European route to other shores is a controversy among students of Turkish politics. The variance in the breaking points offered in the academic literature, such as 2005 (Aydın-Düzgit and Kaliber 2016(Alpan 2016), 2009(Onar 2016) or 2010-2011(Çınar 2018, indeed epitomises the ambivalence of Turkish foreign policy discourse that tries to articulate heterogeneous demands simultaneously. Towards the end of the first decade of the 2000s, Turkey witnessed a dramatic power shift, challenging the long reign of the Kemalist establishment. The secularist/Kemalist hold in the military and judiciary was neutralised not only by the civilianising reforms along with the Eu acquis, but also the 2008 Ergenekon and 2010 Sledgehammer Trials and 2010 Constitutional Referendum. This left the dominant party in the parliament, the AKP, without any mechanism to challenge itself (Taş 2015, 781). The logic of equivalence was again articulated by a populist discourse against the Kemalist elite, and more specifically its collusive networks, ie the 'deep state' , which those trials were supposed to investigate. Nevertheless, the antagonistic frontiers were dislocated according to the change of power. Some AKP figures, more confident about the party's electoral power and less needy of the legitimacy derived by the support of liberal intellectuals and the Eu leaders, began stating that their collaboration with the liberals had ended and the new era would be different (T24 2013). The new populism was given an Islamist twist that is constantly signified by elements of Sunni victimhood across the region. Inclusionary depictions of Anatolia as a mosaic culture left the discourse to a civilisational re-articulation of Turkey as a Sunni Muslim nation.
Parallel to the re-signification of the people with dominantly religious elements, the new revisionist foreign policy discourse took a Eurosceptic turn. Several external factors contributed to this discursive shift; for instance, Eu actors' repeated statements about the possibility of a 'privileged membership' instead of a full membership, their blockage of several negotiation chapters because of the Cyprus Conflict, and the growing Islamophobic sentiments across Europe led only to more resentment against the Eu. Moreover, Turkey intended to diversify its scope and pursue alternative foreign policy paths to survive the Great Recession (2007)(2008)(2009) already gripping the West. In that regard, the Arab uprisings overhauling the Middle East appeared to be a political bonanza for Turkey to exploit the new geopolitical opportunities.
The discursive shift roughly coincides with Ahmet Davutoğlu's appointment to the office of Foreign Minister in 2009. The new era was boldly marked by his treatise on foreign policy, Stratejik Derinlik (Strategic Depth), which outlined the blueprint of a grand vision for post-Cold War Turkey (2001). Also known as the Davutoğlu Doctrine, it meant the establishment of Pax Ottomana by invoking historical and religious connections in the formerly Ottoman territories. While Eu membership was still a strategic goal, the AKP approached it purely in pragmatic terms (Alpan 2016). The Eu now became only one aspect of Turkey's multi-dimensional foreign policy. While this new discourse challenged the normative superiority of Europe, defining Turkey as part of another civilisation slowed down the reformist momentum. Davutoğlu's distinction between the Western and Islamic civilisations is clearer in his dissertation Alternative Paradigms: The Impact of Islamic and Western Weltanschauungs on Political Theory (1994). Here, Davutoğlu argues for essential, rather than political, differences between the two civilisational paradigms. Nevertheless, the pan-Islamist and pan-Ottoman expansionist foreign policy was balanced by a pro-Western realism (Onar 2016).
According to Davutoğlu, Turkey, no longer content to be a junior/regional player, had a historic mission to lead the looming transformation in its region and was destined to be a global power and a 'central state' due to its history. The emphasis on the 'central state' was an implicit and revisionist critique of the post-Cold War foreign policy assigning Turkey a passive 'bridge' role. This revisionist tone was later boldly manifested in the slogan 'The world is greater than five' that Erdoğan used at the United Nations (UN) General Assembly in September 2014, directed against the five permanent members of the UN Security Council (Al Jazeera Turk 2014). Turkey should be an 'order-setter' due to its 'strategic depth', which rests on two pillars: historical depth and geographic depth (Davutoğlu 2001). In a paper titled 'Principles of Turkish Foreign Policy and Regional Political Structuring', Davutoğlu also identified the basic pillars of the new foreign policy as 'rhythmic diplomacy, multi-dimensional foreign policy, zero problems with neighbors, order instituting actor, international cooperation, or proactive foreign policy' (Davutoğlu 2012, 4). A zero-problem foreign policy entailed bold activism aimed at establishing economic and security co-operation with neighbours. In tandem with this policy, the new National Security Policy Document (Milli Güvenlik Siyaset Belgesi), commonly referred to as the Red Book, no longer identified Russia, Iran, Iraq and Greece as existential security threats (Kösebalaban 2011, 152). The other pillars, multi-dimensional foreign policy and rhythmic diplomacy, were meant to expand Turkey's political, economic and cultural reach in a flexible and dynamic approach. To enhance Turkey's soft power assets, several official bodies, such as the Presidency for Turks Abroad and Related Communities (Yurtdışı Türkler ve Akraba Topluluklar Başkanlığı - YTB) and the Yunus Emre Institutes (Turkish cultural centres), were installed to serve this proactive foreign policy. The Turkish Cooperation and Development Agency (TİKA), providing developmental and humanitarian aid to several countries in the region, was restructured by a 2011 decree (KHK/656), increasing its geographical reach and capacity (Sevin 2017, 145-147).
The AKP's work to strengthen ties with the Middle East always had an anti-Kemalist sentiment, as it responded to the Kemalist establishment's alleged negligence of the region. Unlike the Kemalist opposition, which criticised the government for dragging the country into 'the Middle East swamp', Davutoğlu elevated the region to an ontological and religious status, depicted it as 'the center of sacred revelation', and assigned Turks a mission of re-civilising the region (Daily Sabah 2014). In this regard, Davutoğlu read the Arab uprisings within an Islamist framework and saw them as an opportunity to empower the suppressed Islamic groups and restore a Muslim civilisational identity to the region under Turkey's leadership (Ozkan 2014).
The temporary arrest of a basically unsteady structure with an Islamist populist discourse permitted the AKP to maintain its power, but the Ottoman myth, as a surface of inscription, reveals the dislocation. What the Ottoman signified shifted from a pluralist world to the rejection of Europe as another civilisation and the leadership of the ummah. The muted Ottoman pluralism left its place to a more pan-Islamist conception that equates nation and Islam. In this re-signification and re-romanticisation of the Ottoman, its unique past grants Turkey a hierarchically superior position and the self-declared rightas well as the duty -to speak on behalf of the silenced regimes in the region. This is not a relation among Muslim fellows on an equal footing but based on an exceptionalist understanding entrusting Turkey with a historic mission: helping the oppressed in the face of plots and treachery, and rehabilitating the region suffering from the void left by the demise of the Ottoman empire. 'As in the 16th Century, when the Ottoman Balkans were rising, we will once again make the Balkans, the Caucasus and the Middle East, together with Turkey, the center of world politics in the future' , Davutoğlu said, and set the goal of Turkish foreign policy as follows: 'On the historic march of our holy nation, the AK Party signals the birth of a global power and the mission for a new world order' (Bekdil 2015).
The AKP also tried to suture the dislocation by inventing empty signifiers, such as 'advanced democracy' ('ileri demokrasi') and 'New Turkey' ('Yeni Türkiye'). Advanced democracy aimed to strip off 'conservative democracy' from the universal liberal norms but emphasised the distinct local roots (Alpan 2016). Likewise, the idea of 'New Turkey' as another empty signifier was put into circulation around 2010. It served as a utopia, discrete and ambivalent, but an appealing dream for all (Hürriyet 2013). These signifiers aimed to discursively sustain the progressive politics of hope the AKP pursued in the very beginning.
With an adamantly secularist and nationalist position, the main opposition, the CHP under Deniz Baykal, long reaped the benefits of political polarisation and did not initiate a rival hegemonic project. His successor, Kemal Kılıçdaroğlu, was more compromising, as in the case of placing Ihsanoğlu as the joint candidate of the opposition in the 2014 presidential elections. Nevertheless, his modest manoeuvres to transform the party largely failed (Tuğal 2021, 47), and, already incapacitated by the anti-coup trials and the repressive environment that followed, the political opposition generally did not succeed in building a new antagonistic line that went beyond the old ulusalcı articulations. The AKP's neoliberal political and economic project, however, faced substantial resistance on the streets. From the general strike at the Tobacco, Tobacco Products, Salt and Alcohol Enterprises (TEKEL) (2009-2010) to the 2012 Republic Day marches to the 2013 Gezi Protests (Ongur 2018, 51), massive street mobilisation would challenge AKP rule and pave the way for another shift in its populism.
AKP populism as Turkish nationalism
The 2013 Gezi protests, which began on 28 May as an environmental reaction to the demolition of Gezi Park at Taksim Square, Istanbul, exploded into mass demonstrations against the government. Leaving aside the low-hanging fruits of the so-called 'Arab Spring' that was soon suppressed by authoritarian regimes, the AKP elite now had to worry about the coming of the 'Turkish Spring' against them. With the Kemalist establishment subdued in domestic politics, Erdoğan had to demonstrate extraordinary dexterity in fashioning an elite, against which an equivalential chain among heterogeneous demands could be pitted. The elite in this new populism came from outside, ie the Western countries, global financial centres, international institutions or, overall, the international 'mastermind' ('üst akıl'), which is determined to divide and conquer Turkey (yeni Akit 2016). Depictions of Anatolia as a mosaic culture in the first period of AKP rule switched first to a civilisational re-signification of the people as comprising a Muslim nation, and then to a nativist articulation as a Muslim Turkish nation, which was reproduced in domestic and foreign policy realms.
The AKP was now holding the reins of state power; however, the two years from the beginning of Gezi Protests in May 2013 to the June 2015 elections, when the AKP lost the parliamentary majority for the first time, were quite challenging for the government. In addition to the continuing waves of the Gezi Protests across the country, the 17-25 December graft probe into Erdoğan's entourage the same year and the all-out war between the Gülen Movement (GM) and the AKP put further strain on Turkish politics (Taş 2018). Moreover, not unrelated to Kurds' lack of support for a presidential system, the Kurdish resolution process also came to an end following the consecutive terrorist attacks in Suruç and Ceylanpınar that killed several Kurds and policemen, respectively, in July 2015. While Erdoğan parted with earlier stakeholders -first liberals, now the Gülenists and the Kurds -he responded to the growing opposition by mobilising the myth of a liberation war against Western imperialists and its domestic pawns. Amidst shifting power blocs, this helped him consolidate his electoral base (Destradi, Johannes, and Taş 2022).
In his early years, Erdoğan frequently stated that he trampled on all nationalisms, and he reiterated this until 2013 (Erdoğan 2013). However, the rise of transnational Kurdish irredentism provided the grounds for the AKP's new alliance with the MHP and anti-Western Kemalists around a nativist nationalist discourse (Christofis 2022). Ankara saw the formation of Rojava, the de facto autonomous Kurdish administration in Northern Syria, as a direct threat to its territories. In this regard, the repeated cross-border operations in Syria and Iraq punctuated a new elite pact. Yet they also sparked a rally-around-the-flag effect, which Erdoğan has effectively used in all the polls since the November 2015 general election (Çevik 2020a). Amidst the rising nationalist fervour, the new antagonistic frontier became most visible during the 2019 local elections, when Erdoğan designated his cooperation with the right- and left-wing ultranationalists as the People's Alliance (Cumhur İttifakı) and the political opposition, including the Kurds and Gülenists, as the Alliance of Despicables (Zillet İttifakı) (Erdoğan 2019).
After Mevlüt Çavuşoğlu's 2015 appointment as Foreign Minister, and especially following Davutoğlu's forced resignation from his leader post in May 2016, President Erdoğan increasingly micromanaged foreign policy along with conspiratorial discourse. In the aftermath of the 15 July 2016 abortive coup, he resorted to fierce anti-Western rhetoric and strengthened his discourse of war against Western imperialists and their domestic collaborators. 'Turkey is witnessing its biggest struggle since the war of independence', Erdoğan warned, and called for Turks to prepare to fight for 'a united nation, a united fatherland, a united state' (von Schwerin 2017). In particular, the United States' support for Kurdish fighters in Northern Syria against the Islamic State (ISIS) or refusal to extradite the GM's Pennsylvania-based leader Fethullah Gülen signalled varied security frameworks, among a variety of reasons (Balta 2018). While the AKP government increasingly took unilateral aggressive actions, such as its gas drilling activities in the Eastern Mediterranean basin, it also sought to balance the Western powers via rapprochement with Russia and China. In this period, Turkey's purchase of the Russian S-400 air defence system was widely seen as a drift away from its historic transatlantic security structure represented by the North Atlantic Treaty Organization (NATO). Erdoğan also occasionally brought up his intention to join the Shanghai Cooperation Organisation (SCO) as an alternative to the EU; however, Turkey's Eurasianist vision had its political limits as well due to conflicting security interests (Kubicek 2022).
The strategic relationship with the EU was replaced by a transactional one. As a reflection of this quid pro quo approach, Turkey once again used its geopolitical location as a gatekeeper for the EU and a buffer zone between the latter and millions of Middle Eastern refugees and militants. While immediate concerns such as the 2016 refugee deal and counterterrorism prevented European leadership from putting strong pressure on Turkey's anti-liberal practices, such a transactional relationship helped Erdoğan's regime sustain its legitimacy. Nevertheless, amidst the ebbs and flows of Turkish foreign relations, Erdoğan's ambitious foreign policy activism and later efforts to save the day have resulted in a series of sharp foreign policy reversals and eventually in Turkey's 'precious loneliness', which he has sought to overcome by mending fences with regional powers, especially after 2020 (Dalacoura 2021).
The dislocation can be seen again in the re-signification of the Ottoman myth. The earlier emphasis on the glory of the empire and its position as the leader of the ummah faded. As reflected in many drama series aired on pro-government TV channels during these years, the Ottoman was now signified in a defensive frame built around cult figures like Sultan Abdulhamid II as the target of Western imperialists, and around the nation's survival struggle (Çevik 2019). Anti-Western nationalist fervour is evident in the re-signification of the people in a liberation war, as epitomised in the last chapter of the empire. According to Erdoğan, 'World War I has not yet ended', so the task of saving the Ottoman Empire from Western imperialists and their domestic collaborators, like the Arabs, is still there (2017). The war is quite multifaceted. For instance, Erdoğan compared Turkey's economic crisis to the capitulations and debt spiral that led to the collapse of the empire a century ago: 'We are now waging a struggle of historic importance against those who seek to yet again condemn Turkey to modern-day capitulations through the shackles of interest rates, exchange rates and inflation' (2020). The AKP also 'discovered' the battle of Kut al Amara (1915–1916), an Ottoman victory against the British forces during World War I, and invented the tradition of celebrating its anniversaries since its centennial in 2016 (Milliyet 2016).
In the securitised context of post-coup Turkey, the AKP heavily repressed equally valid alternative discourses and left little room for the fractured opposition to survive. However, the opposition's solidarity and victory in the 2019 Istanbul mayoral elections demonstrated the limits of the AKP's coercive and consensual tactics to sustain its rule. In particular, the new mayor Ekrem İmamoğlu, who bought the iconic painting of Ottoman Sultan Mehmed II and hosted the descendants of the royal Ottoman family in 2020, challenged the AKP's monopoly over the Ottoman myth (Karar 2020).
Conclusion: AKP populism from unity to uniformity
Unlike Laclau's emphasis on the unifying equivalential chains, this article argues that dislocation plays a central role in retaining the populist moment. However, this requires a different take on dislocation. Instead of using the concept to refer to a crisis or structural failure as the root cause (or beginning) of populism, one may approach dislocation as an intrinsic element of populism, building on the latter's chameleonic quality. In the case of Turkey, the Ottoman as a surface of inscription reveals such successive dislocations from conservative to Islamist to nationalist populisms. Rather than analysing the AKP era as a whole or in two broad episodes (democratic and authoritarian), one should instead consider these intra-discursive dislocations as an inherent and constant element of Turkish populism. Yet the same signifiers (the nodal points, the empty signifiers, the political leader, the myths) function like glue that cements over fissures, alterations, shifts and ruptures, and enables continuity in the populist moment.
According to Laclau, the 'people' becomes 'intensionally poorer' the more it extends and encompasses more heterogeneous social demands, so it becomes a 'tendentially empty signifier' (Laclau 2005a, 96). Turkish populism demonstrates a reverse tide, in which the meaning of the 'people' slides from a unity-driven, broad discourse to a uniform construct. Despite including the demands of the ultranationalists, 'the people' in AKP populism today signifies a more restricted equivalential chain. Former foreign ministers Babacan and Davutoğlu, who represented the conservative democratic and Islamist wings of the AKP, cut off their relationship with the party and formed their own splinter parties, abandoning the AKP to nationalist discourse. With the weakened flexibility in the party's political discourse, Erdoğan instead incorporated external alternatives such as the centre-right Tansu Çiller and the Islamist Fatih Erbakan into his electoral bloc (Çevik 2020b). Nevertheless, Turkish nationalism is still rewarding for Erdoğan, since divergence around the Kurdish Question still impedes the formation of an alternative equivalential chain by the opposition. Overall, the AKP's constant shape-shifting -from a vanguard of the EU membership process to anti-Western nationalism -should not necessarily be understood in terms of its ideological pursuits, but rather its populist calculations, depending on the changing circumstances. This tide of continual change in foreign policy is not an anomaly, but an intrinsic element of populism.
suggests. The divergence mainly stems from different conceptions of populism. The ideational approach sees anti-pluralism as part of populism, but the discourse-theoretic approach relates it to the attaching ideologies (Katsambekis 2022). | 2022-08-26T15:05:21.283Z | 2022-08-24T00:00:00.000 | {
"year": 2022,
"sha1": "2cf07546b36b600febda04ce6d9f7c43008bc4aa",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/01436597.2022.2108392?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "1f55f09758aa6e034fd2cff64b18d9260834a899",
"s2fieldsofstudy": [
"Sociology",
"Political Science"
],
"extfieldsofstudy": []
} |
225166452 | pes2o/s2orc | v3-fos-license | Representation and Reasoning of Three-Dimensional Spatial Relationships Based on R5DOS-Intersection Model Representation and Reasoning Based on R5DOS Model
This paper aims to disclose the compound topological and directional relationships of three simple regions in the three-dimensional (3D) space. For this purpose, the directional model and the 8-intersection model were coupled into an R5DOS-intersection model and used to represent three simple regions in the 3D space. The matrices represented by the model were found to be complete and mutually exclusive. Then, a self-designed algorithm was adopted to solve the model, yielding 11,038 achievable topological and directional relationships. Compared with the minimum bounding rectangle (MBR) model, the proposed model boasts strong expressive power. Finally, our model was applied to derive the topological and directional relationships between simple regions A and C from the known relationships between simple regions A and B and those between B and C. Based on the results, a compound relationship reasoning table was established for A and C. The research results shed new light on the representation and reasoning of 3D spatial relationships.
Introduction
The reasoning of spatial relationship, a.k.a. spatial reasoning, can be implemented quantitatively or qualitatively. Qualitative spatial reasoning, aiming to represent and analyze spatial information, is an important tool in artificial intelligence (AI), machine vision, robot navigation [1,2], and geographic information systems [3].
Over three decades, many theories and models have been developed for spatial reasoning. For instance, Randell et al. [4,5] put forward the region connection calculus (RCC) theory. Egenhofer and Franzosa [6,7] proposed the theory of 4-intersection model and 9-intersection model. Li [8] derived a dynamic reasoning method for azimuth relationship.
In recent years, spatial reasoning has evolved rapidly, thanks to the emerging AI applications in image processing [9,10], computer vision [11,12], and model prediction [13]. However, most studies on spatial reasoning focus on the spatial relationships on two-dimensional (2D) planes rather than those in three-dimensional (3D) spaces. The 3D space contains too many information elements to be handled by ordinary reasoning methods.
At present, the relationships between objects in the 3D space are mostly solved by compound reasoning. The common approaches of compound reasoning include the compound reasoning of directional and topological relationships [14,15] and the compound reasoning of directional and distance relationships [16]. Liu et al. [17] designed a 3D improved composite spatial relationship model (3D-ICSRM) in a large-scale environment and proposed a reasoning algorithm to solve that model. The accuracy of the 3D-ICSRM is very limited, and it considers the relationship between qualitative distance and direction. In 2016, Hou et al. [18] extended the convex tractable subalgebra into 3D space and used the BCD algorithm to calculate it. In 2019, Wang et al. [19] extended the oriented point relation algebra (OPRAm) model to 3D and proposed the oriented point relation algebra in three dimensions (OPRA3Dm) algorithm, which has certain practical significance. These two papers consider the direction relationship. In recent years, the literature has mainly studied the relationship between direction and qualitative distance, while there is less research on direction and topological relationships. This article will focus on the direction and topological relationship to fill the gaps in this field.
This paper aims to disclose the compound topological and directional relationships of three simple regions in the 3D space. Firstly, the RCC-5 model was combined with a strong directional relationship model for two simple regions, based on the extended 4-intersection theory and the spatial orientation relationships in RCC-5. The combined model was used to identify the compound topological and azimuth relationships between two simple regions, and solved by a self-designed algorithm. Through programming, a total of 65 topological and directional relationships were obtained in the 3D space.
On this basis, the extended 4-intersection matrix was replaced with an 8-intersection matrix to represent the 3D spatial topological and directional relationships between three simple regions. Then, it was found that the topological and directional relationships between the R5DOS-intersection model of two regions and three regions are complete and mutually exclusive. Further programming reveals a total of 11,038 topological and azimuth relationships between three simple regions in the 3D space and derives a simple topological and directional relationship R (A, C) from two sets of two simple regions R (A, B) and R (B, C).
RCC Theory.
In 1992, Randell et al. [4,5] proposed the RCC theory and established the RCC-8 intersection model, which is a boundary-sensitive model. Based on the boundary-sensitive conditions, the RCC-5 intersection model can be derived (Figure 1).
In 1991 and 1995, Egenhofer et al. constructed an extended 4-intersection matrix, which covers two space objects A and B, with A° being the interior of A. The value of each position in the matrix is either empty or nonempty. Then, the five kinds of relationships in the RCC-5 intersection model can be represented as the matrices in Table 1 and expressed as a set R5 of 0-1 matrices. For three simple regions A, B, and C, R^2 − (∂A ∪ ∂B ∪ ∂C) can be partitioned into 8 parts (Figure 2). The eight parts can be illustrated by an 8-intersection matrix. The RCC theory has fueled research on spatial relationship models over the past three decades, giving birth to many new theories. Nonetheless, most of these theories target the 2D plane rather than the 3D space. Recently, there has been growing interest in spatial relationship models for the 3D space, especially the compound reasoning of directional and topological relationships, and that of directional and distance relationships.
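As a rough illustration of how the five RCC-5 relations can be decided for two discretised regions, the sketch below (Python) classifies two sets of grid cells by checking the emptiness of their intersection and set differences; it is a minimal sketch of the standard RCC-5 semantics and not a reproduction of the paper's Table 1 matrix encoding.

```python
# Minimal sketch (Python): deciding the RCC-5 relation between two regions
# represented as sets of grid cells. The emptiness checks mirror the idea of
# the extended 4-intersection matrix (empty vs. nonempty entries); the exact
# matrix encoding of Table 1 in the paper is not reproduced here.

def rcc5(a: set, b: set) -> str:
    """Return the RCC-5 relation between regions a and b."""
    if a == b:
        return "EQ"   # equal
    if not a & b:
        return "DR"   # discrete (disjoint)
    if a < b:
        return "PP"   # proper part
    if b < a:
        return "PPI"  # proper part inverse
    return "PO"       # partial overlap

# Example: A overlaps B, C is properly contained in B, and A and C are disjoint.
A = {(0, 0), (0, 1), (1, 0)}
B = {(0, 1), (1, 0), (1, 1), (2, 2)}
C = {(1, 1), (2, 2)}
print(rcc5(A, B), rcc5(C, B), rcc5(A, C))   # PO PP DR
```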
The MBR model, 8-direction model, and 16-direction model are shown in Figure 3 below. The MBR model is not consistent with human cognition of directions.
In 2010, He and Bian [21] came up with a special 8-direction cone model (Figure 4), which divides the space into eight regions: NW, NE, EN, ES, SE, SW, WS, and WN. Among them, NW and NE belong to the N direction, EN and ES belong to the E direction, SE and SW belong to the S direction, and WS and WN belong to the W direction. The 8-direction cone model is easy to describe and recognize and is flexible in dealing with relationships in multiple dimensions. Compared with the 8-direction cone model, the 16-direction cone model is also consistent with the human cognition of directions, but too complicated to express. Hence, the 8-direction cone model is more suitable for the reasoning of spatial relationships.
Considering its excellence in spatial segmentation, the 8-direction cone model was coupled with the RCC-5 intersection model for compound reasoning of topological and azimuth relationships in the 3D space.
Model Construction.
Any object in space is wrapped by an outer sphere ⊙A with a radius r_A (Figure 5). Taking the center of ⊙A as the origin of the rectangular coordinate system in space, the spatial Cartesian coordinate system can be established and the reference space can be divided into eight intervals by the x-, y-, and z-axes. Each interval is called a hexagram limit (octant), Oct. (Figure 1: The relationships between the RCC-8 and RCC-5 intersection models.) The outer sphere ⊙B completely covers the n points of B: ∀(x_Bi, y_Bi, z_Bi) ∈ ⊙B. Similarly, the outer sphere ⊙C for point set C can be defined in the same way. If it is impossible to find the outer sphere of a space object, the object can be treated as an irregular convex object.
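To make the octant (hexagram limit) division concrete, the following Python sketch assigns a point, for example the centre of an outer sphere, to one of the eight octants formed by the coordinate planes; the particular numbering of the octants used below is an assumed convention and not necessarily the one adopted in the paper.

```python
# Minimal sketch (Python): assigning a point (e.g. the centre of an outer
# sphere) to one of the eight octants (hexagram limits) formed by the x-, y-
# and z-axes. The octant numbering 1..8 used here is an assumed convention.

def octant(x: float, y: float, z: float) -> int:
    """Return an octant index 1..8 for a point with nonzero coordinates."""
    index = 1
    if x < 0:
        index += 1
    if y < 0:
        index += 2
    if z < 0:
        index += 4
    return index

# Example: the centre of outer sphere B, expressed relative to the centre of A.
print(octant(2.0, -1.5, 0.5))   # -> 3 under this numbering convention
```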
Then, five planes π1: y = 0, π2: x = 0, π3: z = 0, π4: y = z, and π5: y = −z can be inserted into the rectangular coordinate system in space (Figure 7). The 3D space can then be represented by the angle corresponding to each region, where θ is the dihedral angle of the plane πi (i = 1, 2, 3, 4, 5). Adding the set of hexagram limits Oct = {1, 2, 3, 4, 5, 6, 7, 8}, the space can be divided into 16 regions, where DO is the set of 3D regions and their hexagram limits. If the center of outer sphere B exists in region 1NE, then B strongly exists in that region, denoted as s1NE. If outer sphere B partly exists in region 2NE, then B weakly exists in that region, denoted as w2NE. We let "0" indicate that there is no object in the area, "1" indicate that the object "strongly
exists" in this area, and "2" indicates that the object "weakly exists" in this area. An example is shown in Figure 8: For simplicity, only strong existence scenarios were considered. en, the set of regions, where B strongly exists, can be defined as follows: Complexity where θ ob the dihedral angle formed by planes π ob and π 1 , which is perpendicular to the x-axis and passes the straight line ab (Figure 9). For two regions, the extended 4-intersection matrix can be introduced to the DOS: For three regions, the 8-intersection matrix can be introduced to the DOS: Our model consists of two layers: the first layer is the topological relationship R 5 layer, and the second layer is the orientation relationship DOS layer. en, the following definition can be derived. Suppose For any two simple regions A and B, it is possible to obtain a 5 × 4'0-1 matrix. In theory, a total of 2 20 matrices could be acquired, which correspond to 2 20 topological and directional relationships in the 3D space: 6 Complexity Based on the topological relationship between outer spheres B and C, the existence of the centers of the two spheres can be described in two cases.
According to the above conditions, 2^8 × 3^16 matrices could be obtained theoretically, which correspond to 2^8 × 3^16 topological and directional relationships in the 3D space.
Model Properties
Definition 2. In layer R5, for any two m × n 0-1 matrices A = (a_ij) and B = (b_ij), their union is defined as A ∪ B = (a_ij ∨ b_ij). Then, a 0-1 diagonal matrix can be established as in Table 2.
The following proposition can be derived from Table 2:
R (A, B) is the element that corresponds to the topological relationship R5 between any two simple regions A and B.
Then, the following theorem can be obtained. Assume that the topological relationship between A, B, and C corresponds to two matrices R5 3DOSa and R5 3DOSb, both induced by the R5 3DOS-intersection model. Then, there exists 1 ≤ i ≤ 24 such that R5 3DOSa_i ≠ R5 3DOSb_i. If i = 1, R5 3DOSa_i = 0 and R5 3DOSb_i = 1, then A° ∩ B° ∩ C° is both empty and nonempty, which is obviously contradictory. Hence, the above theorem is proved valid.
Constraints on Three Simple
Regions. The following constraints were designed on three simple regions. Constraint 1: to uniquely correspond to the topological and directional relationships in the 3D space, an R5 3DOS matrix must satisfy the following conditions.
Case 2: if any two of the three simple regions are equal, the ternary region can be regarded as a binary region with only one 1 in the DOS layer.
Case 3: if any two of the three simple regions are inclusive or noninclusive, the ternary region can be regarded as a binary region when any two regions intersect and the sum of layer R5 is 4.
Case 4: if only one of the three simple regions is inclusive or noninclusive, the ternary region can be regarded as a binary region when any two regions intersect and the sum of layer R5 is 5.
Case 5: if the three simple regions are disjoint, the ternary region can be regarded as a binary region when any two regions intersect and the sum of layer R5 is 5.
Case 6: if simple regions B and C are inclusive or noninclusive and separated from A, then the center of A can only fall within B and C.
For a ternary reference object in the 3D space, there are theoretically 2^8 × 3^16 matrices. Under the above constraints, a total of 11,038 matrices were obtained after removing the nonexistent scenarios.
Topological Relationship Algorithm for 3 Simple Regions in the 3D Space.
The topological relationship algorithm for 3 simple regions in the 3D space can be implemented in the following steps.
Step 2: scan each row of matrix A, and mark all row vectors that satisfy the constraints.
Step 3: save all the marked row vectors as a matrix B and output the matrix as the final result. The pseudocode of the algorithm is displayed as follows. Topological and directional relationship: Gen(null; R5 3DOSa) // Input: null; output: topological relationships satisfying the constraints (Algorithm 1).
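A rough Python sketch of the screening procedure described in Steps 2-3 is given below. The paper's original supplementary code is in MATLAB, and the function satisfies_constraints here is only a placeholder standing in for Constraints 1-6, so this is an illustrative sketch rather than the authors' actual implementation.

```python
# Minimal sketch (Python) of the screening algorithm: enumerate candidate
# R5DOS matrices, keep only those that satisfy the constraints, and return
# the survivors. The original code is MATLAB; satisfies_constraints below is
# a placeholder for the paper's Constraints 1-6.
from itertools import product

def satisfies_constraints(r5_layer, dos_layer) -> bool:
    # Placeholder check: at least one topological entry set and a non-empty
    # DOS layer. A real implementation would encode Constraints 1-6.
    return any(r5_layer) and any(dos_layer)

def screen_matrices(r5_len=8, dos_len=16):
    """Yield (R5 layer, DOS layer) pairs that pass the constraint checks.

    R5-layer entries are 0/1 and DOS-layer entries are 0/1/2 (absent /
    strongly exists / weakly exists), giving 2^8 * 3^16 candidates for
    three regions before screening.
    """
    for r5_layer in product((0, 1), repeat=r5_len):
        for dos_layer in product((0, 1, 2), repeat=dos_len):
            if satisfies_constraints(r5_layer, dos_layer):
                yield r5_layer, dos_layer

# Example with deliberately small layers, to keep the enumeration cheap:
survivors = list(screen_matrices(r5_len=2, dos_len=3))
print(len(survivors))
```

A brute-force enumeration of all 2^8 × 3^16 candidates is far too large to run naively, so any real implementation would have to prune candidates during generation; the small-layer example above is only meant to show the filtering pattern.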
Comparison between R5 3 DOS-Intersection Model and MBR Model.
This section proves that the R5 3DOS-intersection model has stronger expressive power than the MBR model in the 3D space [21][22][23].
First, layer R5 was defined as R (A, B) = PPI, R (A, C) = PPI, and R (B, C) = PPI, and the center of outer sphere B was assumed to fall into 1NE or 2NE. This situation does not exist in the real world. Under Constraints 2 and 4, there is no solution to this situation. However, the R5 3DOS-intersection model can explain the situation that cannot be realized in the 3D space.
Next, the R5 3DOS-intersection model was found capable of expressing situations that cannot be illustrated by the MBR model, through the analysis of the following example. For any three external spheres A-C in the 3D space, it is assumed that the topological and azimuth relationships between them are known, and these spheres are separated from each other.
Algorithm 1 (fragment): (1) R5 3DOSaALL ← 2^8 × 3^16 ... (9) end if (10) end for (11) return R5. Then, without changing the positions of A-C, the images of the R5 3DOS-intersection model in the two examples can be obtained as Figures 13 and 14, where the green, blue, and red balls are the outer spheres A-C, respectively (Figures 15 and 16).
In the same way, we can get the corresponding R5 3DOS-intersection model (Figures 17 and 18). Through the above comparison, it can be seen that the R5 3DOS-intersection model can represent the topological relationships of space objects A, B, and C, and it can accurately represent the spatial situations that the MBR model cannot represent.
Compound Relationship Reasoning Based on R5 3DOS-Intersection Model.
This section applies the R5 3DOS-intersection model to the reasoning of the compound relationships between simple regions in the 3D space. It is assumed that the topological and azimuth relationships between simple regions A and B and those between simple regions B and C are known in advance. Then, the goal is to deduce the possible topological and azimuth relationships between simple regions A and C.
According to Section 2.3, using the R5 3DOS-intersection model, a total of 65 topological and azimuth relationships were obtained from the real world. Hence, it is possible to obtain 65 0-1 matrices of 5 rows and 4 columns, denoted as Ω1 = {R_i; i = 1, ..., 65}. Taking region A as the target, the topological and directional relationships between A and C and those between B and C were taken into account.
Since the topological and azimuth relationships between simple regions A and B and those between simple regions B and C are known in advance, we have R(B, C) ∈ Ω1. Then, the possible topological and orientation relationships between A and C were derived from the R5 3DOS-intersection model according to Definition 2. (Table 3: The list of all directional and topological relationships.)
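The reasoning step itself can be organised as a lookup from pairs of known relationships to the set of possible relationships between A and C. The sketch below only shows this table-lookup pattern; the entries are hypothetical, since the actual 65 relationships and their composition table are given in the paper's tables.

```python
# Minimal sketch (Python): compound-relationship reasoning as a table lookup.
# The entries below are hypothetical placeholders; the real table would be
# built from the 65 R5DOS relationships derived in the paper.
from typing import Dict, FrozenSet, Tuple

Relation = str  # an identifier for one of the 65 R5DOS relationships

# compose[(R(A, B), R(B, C))] -> set of possible R(A, C)
compose: Dict[Tuple[Relation, Relation], FrozenSet[Relation]] = {
    ("R1", "R1"): frozenset({"R1"}),        # hypothetical entry
    ("R1", "R7"): frozenset({"R3", "R7"}),  # hypothetical entry
}

def possible_ac(r_ab: Relation, r_bc: Relation) -> FrozenSet[Relation]:
    """Return the relationships R(A, C) consistent with R(A, B) and R(B, C)."""
    return compose.get((r_ab, r_bc), frozenset())

print(sorted(possible_ac("R1", "R7")))   # ['R3', 'R7']
```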
Supplementary Materials
This code is the screening algorithm of the R5DOS-intersection model.
The purpose is to screen the theoretically possible matrices in the model according to the constraints and finally obtain the matrices that meet the requirements; the result of running the code needs simple processing and is not the final result of the article. The code is developed based on MATLAB software. (Supplementary Materials) | 2020-10-28T18:54:33.108Z | 2020-10-06T00:00:00.000 | {
"year": 2020,
"sha1": "0e1009ee0754e3be185e9d068074851a0e88a703",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/complexity/2020/3849053.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "cf57972c3b72ff0e9cd766ae98664158c081ddf3",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
241379460 | pes2o/s2orc | v3-fos-license | Membrane depolarization kills dormant Bacillus subtilis cells by generating a lethal dose of ROS
The bactericidal activity of several antibiotics partially relies on the production of reactive oxygen species (ROS), which is generally linked to enhanced respiration and requires the Fenton reaction. Bacterial persister cells, an important cause of recurring infections, are tolerant to these antibiotics because they are in a dormant state. Here, we use Bacillus subtilis cells in stationary phase, as a model system of dormant cells, to show that pharmacological induction of membrane depolarization enhances the antibiotics' bactericidal activity and also leads to ROS production. However, in contrast to previous studies, this results primarily in production of superoxide radicals and does not require the Fenton reaction. Genetic analyses indicate that Rieske factor QcrA, the iron-sulfur subunit of respiratory complex III, seems to be a primary source of superoxide radicals. Interestingly, the membrane distribution of QcrA changes upon membrane depolarization, suggesting a dissociation of complex III. Thus, our data reveal an alternative mechanism by which antibiotics can cause lethal ROS levels, and may partially explain why membrane-targeting antibiotics are effective in eliminating persisters.
Introduction
Many commonly used antibiotics that interfere with protein synthesis, such as kanamycin and gentamicin, or cell wall synthesis, such as vancomycin and ampicillin, trigger the production of lethal levels of reactive oxygen species (ROS), which contributes to the bactericidal activity of these antibiotics [1][2][3]. The produced ROS consist primarily of hydroxyl radicals generated by the Fenton reaction. The free ferrous iron required for this reaction is assumed to originate from iron-sulfur proteins by a yet not fully understood mechanism [1][2][3]. Despite this double mode of action, bacterial cells can tolerate these antibiotics by shutting down biosynthetic processes and entering a physiologically dormant state 4,5. This so-called 'persister' state can be caused by activation of toxin-antitoxin systems 5,6, the stringent response 7, SOS response 8,9 or simple nutrient starvation 10. Persister cells can resume growth when the antibiotic treatment ceases 5, and are therefore believed to be an important source of recurrent infections 11,12. Therapeutic strategies for eradicating persister cells are therefore direly needed.
We previously showed that the membrane potential is essential for the cellular localization of different bacterial peripheral membrane proteins, such as the conserved cell division proteins FtsA and MinD 13.
Persister cells do not grow, but it is likely that they maintain a membrane potential. Presumably, dissipation of the membrane potential will also result in the delocalization of membrane-associated proteins in persister cells, which could affect their viability. We have investigated this possibility by using dormant, antibiotic-tolerant, Bacillus subtilis cells as a simple model system for persisters. Surprisingly, we found that dissipation of the membrane potential generates lethal levels of reactive oxygen species (ROS). This finding was counterintuitive since ROS production is generally linked to enhanced respiration 14, and in fact membrane depolarization has been shown to reduce electron transfer in the respiratory chain of B. subtilis 15. Moreover, we found that the Fenton reaction was not involved and that especially superoxide radicals were formed. By means of genetic analysis, we could pinpoint one of the sources of ROS to the conserved iron-sulphur cluster protein QcrA of respiratory complex III, also known as Rieske factor 16. Interestingly, microscopic analysis showed that QcrA delocalizes after membrane depolarization, possibly indicating that detachment of QcrA from complex III leads to superoxide radical formation. This novel killing mechanism may explain why membrane-targeting compounds are successful in eradicating antibiotic-tolerant persister cells [17][18][19][20][21]
Antibiotic tolerant B. subtilis cells
The antibiotic tolerance of persisters ensues from the fact that these cells are in a dormant state 5,6. It has been shown that Escherichia coli cells in the stationary growth phase display an antibiotic tolerance that is reminiscent of persister cells 22. To examine whether the same is true for B. subtilis, we used a sporulation-deficient mutant ∆spoIIE, since B. subtilis spores are insensitive to antibiotics and, thus, would confound the measurements. The ∆spoIIE deletion mutant was grown overnight (18 h) in LB medium to stationary phase, and subsequently treated with 10 x minimal inhibitory concentrations of either vancomycin, kanamycin or ciprofloxacin for 10 h (see Table S4 for MIC). Samples for CFU measurements were taken every 2 h, and an exponentially growing culture was used as non-dormant comparison. As shown in Fig. 1, the latter culture was sensitive to all three antibiotics whereas the overnight culture was not, indicating that non-sporulating stationary phase B. subtilis cells can be used as a simple model system for antibiotic-tolerant persister cells.
These stationary cells maintained membrane potential levels comparable to actively growing cells (Fig. S1). Many peripheral membrane proteins use an amphipathic alpha helix domain as a reversible membrane anchor, and we have shown that the binding of such an anchor domain is strongly stimulated by the membrane potential 13. Depolarization of the membrane potential might therefore disturb the normal localization of different peripheral membrane proteins, which, in turn, could affect the viability of persister cells. To test whether dormant B. subtilis cells are sensitive to membrane depolarization, we exposed them to the potassium ionophore valinomycin, which specifically dissipates the transmembrane potential 23,24 (Fig. S1). To assure full dissipation over a 10 h incubation period, we used 100 μM valinomycin, which is 10 x the MIC for exponentially growing cells (Table S4). The addition of valinomycin reduced the viable count of exponentially growing cells by 90 % after 2 h incubation (Fig. 1c). The dormant culture showed some degree of resilience compared to its actively growing counterparts, but after some time the viability decreased and after 6 h approximately 90 % of the cells were killed (Fig. 1d), suggesting that membrane depolarizing compounds might indeed be useful to combat persisters.
Cause of killing
The killing kinetics of valinomycin are quite different between exponentially growing and dormant cells, and in the latter case showed a clear acceleration over time (Fig. 1c & 1d). This made us wonder what actually causes dormant cells to eventually succumb when their membrane is depolarized. It has been shown that depolarization of the B. subtilis cell membrane results in uncontrolled autolysin activity and cell lysis 25. To determine whether this could be responsible for the killing of dormant cells, we exposed a mutant lacking all major autolysins (∆lytA, ∆lytB, ∆lytC, ∆lytD, ∆lytE and ∆lytF) to valinomycin. However, this deletion mutant did not show an increase in viability (Fig. 2a). E. coli persisters can be killed by the induction of cryptic prophages 26. However, a B. subtilis strain cured of all prophages exhibited a similar sensitivity to valinomycin as the wild type strain (Fig. 2b), indicating that the reduction in viable count is also not related to prophage activation.
During the viable count measurements we noticed that after prolonged valinomycin treatment the colony size started to vary and smaller colonies emerged (Fig. 2c). Such variation in colony size is also observed when bacterial cells are treated with DNA damaging agents 27, suggesting that membrane depolarization might cause DNA damage. If this is the case, then a DNA repair mutant, such as a ∆recA mutant, should be more sensitive to valinomycin. This was indeed the case (Fig. 2d).
Generating ROS
A common cause of DNA damage is the accumulation of reactive oxygen species (ROS) in the cell. However, it seems unlikely that dormant B. subtilis cells would produce ROS upon membrane depolarization, since the accumulation of ROS is normally associated with hyper-respiration, which can be prevented by membrane potential dissipating agents 15,28,29. Moreover, antibiotics that have been shown to generate ROS, including norfloxacin, vancomycin and kanamycin [1][2][3], are not active in dormant cells. ROS is primarily a by-product of aerobic respiration and under anaerobic conditions ROS levels are generally much lower 30. When dormant B. subtilis cells were placed into an anaerobic chamber for 10 h only a fraction of cells survived (Fig. 3a). Although B. subtilis can grow anaerobically, it requires certain medium conditions and sufficient time to adapt 31, which likely explains this reduction in viability.
Nevertheless, the fraction of surviving cells was still smaller when valinomycin was added (Fig. 3a).
Since the viability of B. subtilis is greatly affected by anaerobic conditions, even in the absence of valinomycin, this experiment did not reveal whether the DNA damage caused by valinomycin treatment was due to the accumulation of ROS. Therefore, we took another approach and tested the sensitivity of a spx deletion mutant. Spx is the key regulator of the oxidative stress response in B. subtilis and is required for survival in the presence of strong ROS-inducing compounds such as paraquat 32. Interestingly, a ∆spx deletion mutant was even more sensitive to valinomycin than the ∆recA mutant (Fig. 3b), supporting the idea that ROS is a key contributor to the killing of depolarized cells. Finally, to directly show that ROS was generated, we used the specific fluorescent ROS probe 2',7'-dichlorodihydrofluorescein diacetate (H2DCFDA) 3. Dormant B. subtilis cells were incubated with H2DCFDA and exposed to either valinomycin or the well-known ROS inducer paraquat for 2 and 4 h 33. The increase in fluorescence in cells was measured using fluorescence light microscopy. As shown in Fig. 3c, incubation with valinomycin for 2 h resulted in a strong increase in ROS that remained high over a 4 h period and even exceeded the effect of 1 mM paraquat. Thus, membrane depolarization of dormant B. subtilis cells does lead to the accumulation of lethal levels of ROS.
Determining ROS type
Hydroxyl radicals (•OH) and superoxide radicals (O2•−) are the main reactive oxygen species formed during aerobic growth 34. Incubation with antibiotics like norfloxacin, ampicillin and kanamycin results primarily in the formation of hydroxyl radicals made from hydrogen peroxide by the Fenton reaction [1][2][3]. This conclusion was based on the fact that Fe2+-specific chelators like ferrozine or bipyridyl and the hydroxyl radical scavenger thiourea reduced the formation of radicals and diminished the killing efficiency of these antibiotics [1][2][3]. Moreover, increased levels of catalase, which removes hydrogen peroxide 35, also suppressed the killing by either ampicillin, gentamicin or norfloxacin 3. Interestingly, the addition of 0.5 mM ferrozine had no effect on the viability of dormant cells when incubated with valinomycin, and 0.5 mM bipyridyl actually reduced the viability (Fig. 4a). Moreover, the presence of 150 mM thiourea also did not mitigate the effect of valinomycin and in fact enhanced its activity (Fig. 4b). In contrast to this, the presence of 10 mM tiron, a superoxide scavenger, almost completely inhibited the killing by valinomycin (Fig. 4b). Moreover, inactivation of KatA, the main catalase of B. subtilis 36, had no effect on the killing efficiency of valinomycin (Fig. 4c), whereas deleting sodA, encoding the main superoxide dismutase, strongly reduced the viable count upon valinomycin treatment (Fig. 4c). Indeed, ΔsodA cells showed almost a doubling in fluorescence of the ROS probe H2DCFDA compared to wild type cells (Fig. 4d). Apparently, depolarization of dormant B. subtilis cells triggers the production of lethal levels of superoxide radicals, without involvement of the Fenton reaction, which is different from the ROS generation caused by antibiotics like norfloxacin, vancomycin and kanamycin.
Possible source of superoxide production
The accumulation of lethal levels of ROS upon exposure to bactericidal antibiotics is believed to be triggered by a surge in NADH consumption that induces a burst in superoxide generation via the respiratory chain. These superoxide radicals then destabilize iron-sulfur clusters, leading to free ferrous iron which enables the Fenton reaction, thus creating deadly levels of hydroxyl radicals [1][2][3]. The TCA cycle is the main source of NADH (Fig. 5a), and it was shown that inactivation of either isocitrate dehydrogenase or aconitase reduces the killing activity of norfloxacin, vancomycin and kanamycin 1.
When we inactivated B. subtilis pyruvate dehydrogenase, which fuels the TCA cycle, cells became not less but more sensitive to valinomycin (Fig. 5b, ∆pdhB), suggesting that there is no surge in NADH levels upon membrane depolarization. The other well-known source of ROS is the electron transport chain [37][38][39][40]. B. subtilis contains a classic electron transport chain composed of NADH dehydrogenase, succinate dehydrogenase, cytochrome bc complex, and cytochrome c oxidase, also known as complexes I, II, III and IV, respectively (Fig. 5a). Complexes I and II feed electrons to the menaquinone pool. Inactivating one of them, by either deleting ndh, ndhF or sdhC, did not mitigate the killing by valinomycin but made it worse (Fig. 5b). Inactivation of glycerol-3-phosphate dehydrogenase, which oxidizes glycerol-3-phosphate to dihydroxyacetone phosphate using menaquinone 41, had no effect on the viability count (Fig. 5b, ∆glpD).
The absence of either the cytochrome b subunit (∆qcrB) or the cytochrome b/c subunit (∆qcrC) from complex III also strongly reduced the survival chance of valinomycin-treated cells (Fig. 5c). Interestingly, when we deleted qcrA, encoding the Rieske-type iron-sulfur subunit of complex III, cells became more resilient to valinomycin, and after 10 h the viable count was more than 20-fold higher compared to wild type cells (Fig. 5c).
Impairing the expression of one of the cytochrome-c oxidase subunits of complex IV, by deleting either ctaC, ctaD, ctaE or ctaF, made dormant cells slightly more sensitive to membrane depolarization (Fig. 5d). B. subtilis contains two cytochrome-c electron carriers, cytochrome c550, which contains a transmembrane anchor, and cytochrome c551, which is anchored to the cell membrane via a diacyl glycerol tail (Fig. 5a) 42. Deleting one of them did not mitigate the killing by valinomycin (Fig. 5e, ∆cccA or ∆cccB). Inactivating both, by removing enzymes involved in their biogenesis (∆ccdA, ∆resA), strongly increased the sensitivity to valinomycin (Fig. 5e). B. subtilis possesses two alternative cytochrome oxidases that use quinol for their oxidation reaction, cytochrome bd oxidase and cytochrome aa3 oxidase (Fig. 5a). Inactivating one of them, by deleting either qoxB or cydA, considerably reduced the viability of dormant cells when incubated with valinomycin (Fig. 5f).
These surprising results suggest that not so much an active but rather an intact electron transport chain is important for the survival of dormant B. subtilis cells upon membrane depolarization, and that QcrA is a source of superoxide radicals. This Rieske protein contains a unique 2Fe-2S cluster in which one of the two iron atoms is held in place by two histidines rather than two cysteines. This cluster accepts an electron from the quinol anion and transfers it to the cytochrome heme iron 16,43,44, and in fact this step is a well-known source of superoxide radicals in mitochondria 45. To provide further support that QcrA is a likely source of the observed ROS when the membrane potential is dissipated, we deleted qcrA in the ΔrecA background strain, which has an impaired DNA repair system and has been shown to be especially sensitive to valinomycin (Fig. 2d). As shown in Fig. 5g, the absence of QcrA clearly attenuated the valinomycin susceptibility of dormant ΔrecA cells. Finally, we directly measured ROS production in the ∆qcrA mutant using the fluorescent probe H2DCFDA (Fig. 5h). Indeed, in the absence of the Rieske protein the average fluorescence signal after 4 h of valinomycin treatment was considerably lower.
Cellular distribution of QcrA
It is difficult to understand how QcrA can produce lethal levels of superoxide radicals upon membrane depolarization. As mentioned in the introduction, depolarization of the cell membrane leads to the delocalization of different membrane proteins, including some transmembrane proteins 13,46. Possibly, the localization of electron transport chain components is also affected. To examine this, we constructed GFP fusions with the transmembrane proteins QcrA, QcrB and QcrC of complex III, CtaC and CtaD of complex IV, and the main cytochrome C (CccA), and expressed these fusions from an ectopic locus in the genome. Of note, we have not checked whether the GFP fusions influenced their enzymatic activities, since we were only interested in their localization. As shown in Fig. 6a, all dormant cells showed a more or less uniform fluorescent membrane stain. Interestingly, incubation with valinomycin caused a strong clustering of GFP-QcrA over time, whereas the localization of the other fusions was unaffected. When we repeated the experiment with exponentially growing cells, the clustering of GFP-QcrA was already visible within 10 min after the addition of valinomycin, whereas the other fusions showed no delocalization, not even after 30 min (Fig. 6b). Possibly, the distinct clustering of QcrA upon membrane depolarization is somehow responsible for the production of superoxide radicals.
Discussion
In this study we provide evidence that QcrA, the Rieske protein subunit of complex III, produces lethal levels of superoxide radicals upon depolarization of the membrane of dormant B. subtilis cells. Depolarization changes the distribution of QcrA in the cell membrane from smooth to spotty, which is not the case for the other two subunits QcrB and QcrC. This seems to indicate a detachment of QcrA from the other subunits of complex III. We speculate that this hampers electron transfer from the iron-sulfur cluster of QcrA to the heme of QcrB, and/or exposes the iron-sulfur cluster to molecular oxygen, thereby facilitating superoxide radical formation. This might also be a reason why the ∆qcrB and ∆qcrC mutants are extra sensitive to valinomycin (Fig. 5).
How depolarization of the membrane would lead to the disintegration of complex III is unclear. However, in previous work we have shown that the membrane potential is important for the regular distribution of different membrane proteins in B. subtilis, likely related to the distribution of lipids with short, unsaturated or branched fatty acids that stimulate a more liquid disordered phase in the membrane 46. A comparable phenomenon has been observed in yeast, where the distribution of specific transmembrane proton symporters is disturbed when the membrane potential is neutralized 47, and also in mammalian cells, where membrane depolarization affects the clustering of the GTPase Ras-K 48. These effects have been attributed to membrane potential-dependent clustering of specific lipids [47][48][49]. Since a lipid membrane behaves like a capacitor, a voltage difference over the membrane will apply a compression force (Maxwell pressure), which is also referred to as electrostriction. It is assumed that this compression affects the packing of fatty acid chains [50][51][52]. Possibly, the change in bilayer packing upon depolarization triggers the dissociation of QcrA from QcrBC. This dissociation might also explain why menaquinone-dependent electron transfer in the B. subtilis respiratory chain is facilitated by membrane energization 15.
Mitochondrial Rieske protein
The mitochondrial Rieske iron-sulfur protein is one of the key sources of ROS in eukaryotic cells. High respiration levels increase ROS production to dangerous levels. On the other hand, hypoxic/anoxic conditions can also trigger superoxide production by this protein 53. The mechanism by which superoxide is produced under hypoxia/anoxia is not clear 54, but these low oxygen conditions can lead to a reduction of the mitochondrial membrane potential 55,56. Interestingly, the bacterium Paracoccus denitrificans, which contains a respiratory chain similar to that of eukaryotic mitochondria, shows an increase in ROS production in response to hypoxic conditions as well 57. Based on these and our data, it might be interesting to examine whether the mitochondrial complex III also dissociates upon membrane depolarization, thereby exposing the Rieske subunit.
Role of the electron transport chain
It is assumed that the production of superoxide by the Rieske subunit occurs when the oxidized iron-sulfur cluster withdraws one electron from reduced menaquinone (menaquinol) 58. Menaquinol is provided by either NADH dehydrogenase (complex I), succinate dehydrogenase (complex II) or glycerol-3-phosphate dehydrogenase. Inactivation of one of these protein complexes did not mitigate the effect of valinomycin, possibly because of their redundant activities. However, inactivation of pyruvate dehydrogenase, which will lower the production of NADH by the TCA cycle, also did not mitigate the effect of membrane depolarization. On the other hand, mutations that reduce the oxidation of the menaquinol pool, such as the inactivation of the two cytochrome Cs, and of the alternative quinol oxidases cytochrome aa3 and bd oxidases, strongly diminish the viability of cells exposed to valinomycin. Possibly, this raises the menaquinol levels and increases the production of superoxide radicals by the Rieske subunit.
ROS and antibiotics
Whether the production of ROS by bactericidal antibiotics contributes to their efficacy was hotly debated when this phenomenon was first proposed in 2007 1,3,59−61. Our data support the finding that antibiotics can induce lethal levels of ROS. However, the mechanism of ROS production that we found differs from what has been previously described. Bactericidal antibiotics produce hydroxyl radicals from endogenous hydrogen peroxide using the Fenton reaction. The free ferrous iron required for this reaction is assumed to originate from iron-sulfur proteins, whose iron-sulfur clusters have been compromised by superoxide radicals generated by the electron transport chain that has become highly active due to a surge in NADH 62. How this surge happens is still not entirely clear. It has been shown that in E. coli the two-component regulatory systems CpxRA and ArcAB are required for the induction of ROS by antibiotics.
CpxRA senses and responds to aggregated and misfolded proteins in the bacterial cell envelope, and the ArcAB system is involved in sensing oxygen availability and the concomitant transcriptional regulation of oxidative and fermentative catabolism 63. Based on work with the ribosome inhibitor gentamycin, it was proposed that mistranslation and misfolded membrane or periplasmic proteins activate the sensor CpxA, which, by an unknown mechanism, activates ArcA, as a result of which the expression of a large number of metabolic and respiratory genes is activated, leading to increased respiration rates and ultimately ROS production 1. Interestingly, accumulation of misfolded cell envelope proteins by gentamycin also affects the E. coli membrane potential 1. Whether a reduction of the membrane potential contributes to ROS formation in E. coli remains to be investigated. If this turns out to be the case, there is at least no Rieske protein-containing complex III involved, since E. coli lacks this complex. Nevertheless, dormant B. subtilis mutants that lack QcrA still die upon membrane depolarization, albeit more slowly (Fig. 5c), and this might still involve ROS, as the inactivation of the RecA-based DNA repair system renders ∆qcrA cells more sensitive towards valinomycin (compare Fig. 5c and 5g). Possibly, a change in lipid bilayer packing due to membrane potential dissipation also affects the assembly and/or proper membrane embedding of other electron transport chain components. The resulting inefficient electron transfer between components, possibly combined with an increased exposure of their Fe-containing prosthetic groups to oxygen, might then stimulate the production of ROS.
Relevance for persisters
B. subtilis is not a human pathogen, so the question arises how relevant our findings are for the fight against clinically relevant antibiotic-tolerant persisters. Many pathogens, including E. coli, contain a non-canonical electron transport chain and lack complex III. However, depolarization of the cell membrane has multiple effects, including reducing ATP levels, delocalization of peripheral as well as transmembrane proteins, and possible ROS produced by non-canonical electron transport chain components as described above. All these factors will affect the fitness of persister cells. A pathogen and well-known persister that lacks complex III is Staphylococcus aureus. Dormant S. aureus cells are considerably more resistant to valinomycin than B. subtilis (Fig. S4); however, when these S. aureus cells lacked one or both of their main superoxide dismutases, they became substantially more sensitive to incubation with valinomycin (Fig. S4). These data support the notion that membrane depolarization can also result in the accumulation of lethal levels of superoxide radicals when no Rieske protein-containing complex III is present.
A notorious example of a pathogenic persister that does contain the Rieske protein subunit is Mycobacterium tuberculosis. It has been shown that the membrane potential is essential to maintain viability of non-growing mycobacterial cells 20. Interestingly, it was found that resistance of M. tuberculosis against membrane-targeting lipophilic quinazoline derivatives can arise through mutations in QcrA 64. Both the lethal effect of membrane depolarization and the exposure to these quinazoline derivatives were attributed to ATP depletion. However, our results suggest that induction of superoxide radicals might be a more important factor contributing to the lethality of these interventions. In conclusion, membrane depolarization seems to be a promising method to kill bacterial persister cells since it leads to the production of lethal levels of ROS.
Membrane depolarization by the potassium ionophore valinomycin requires the presence of sufficient potassium ions in the medium. Therefore, cells were grown in a modified LB medium composed of 10 g/l tryptone, 5 g/l yeast extract, 50 mM Hepes pH 7.5, and 300 mM KCl, here referred to as 'KCl-LB' medium 24.
Strain construction
Strains used in this study are listed in Table S1, and the plasmids and oligonucleotides used in this study are listed in Tables S2 and S3, respectively. B. subtilis strain construction was performed according to established methods 66. For the labelling of ETC components with GFP, both N- as well as C-terminal monomeric superfolder GFP (msfGFP) fusions were constructed, since it is possible that the GFP moiety interferes with membrane insertion. After microscopic inspection it appeared that only the N-terminal msfGFP fusions of QcrA, QcrB, QcrC, CccA, CtaC and CtaD showed a clear membrane signal, indicating proper membrane insertion. These N-terminal msfGFP fusions were used for the fluorescence microscopy experiments. The xylose-inducible msfGFP fusions were constructed as follows. Target genes were PCR amplified using forward and reverse primer pairs with genomic DNA of strain BSB1 as the template. The amyE-integration vector part with the xylose-inducible promoter and msfGFP sequences was amplified from pHJS105 using primer pairs EKP30 & EPK22 and MW226 & EKP36, for the N- and for the C-terminal fusion, respectively. Target genes and vector amplicons were purified and assembled using two-fragment Gibson assembly. All plasmids were sequenced to confirm the constructs. Subsequently, the plasmids were transformed into strain PG344, resulting in double-crossover integrations positioning the fusions in the amyE locus. Mutants were verified by PCR with primers TerS350 and TerS351 and by sequencing.
Minimum inhibitory concentration (MIC) measurement
The MIC values of B. subtilis strain PG344 were determined according to a standard protocol 67. Overnight B. subtilis PG344 cells were diluted in fresh medium and grown to exponential phase. Cells were then diluted to 1 × 10^5 cells/ml in standard LB or KCl-LB supplemented with increasing concentrations of vancomycin, ciprofloxacin, kanamycin or valinomycin. The lowest concentration inhibiting visible growth after 18 h incubation at 37°C was taken as the MIC value.
Antibiotic susceptibility tests
For the antibiotic susceptibility tests with vancomycin, ciprofloxacin and kanamycin, cells were cultured in LB medium at 37°C with aeration, whereas for valinomycin cells were cultured in KCl-LB medium. The exponential phase or overnight dormant cells were incubated with 20 µg/ml vancomycin, 2 µg/ml ciprofloxacin, 10 µg/ml kanamycin or 100 µM valinomycin. The antibiotic concentration we used is 10-fold in excess of the MIC to ensure stability over the 10 h treatment period. 1 % DMSO was used as a negative control. Supplements were added as required: 10 mM tiron, 150 mM thiourea, 500 µM ferrozine and 500 µM 2,2'-bipyridyl. Cultures were incubated for 10 h at 37°C with aeration, and samples were taken every 2 h in order to determine the CFU through serial dilutions and plating on LB agar. Plates were incubated overnight and colonies were counted the following morning.
Membrane potential determination
The membrane potential levels of B. subtilis cells were determined using the voltage-sensitive dye 3,3′-dipropylthiadicarbocyanine iodide [DiSC3(5)] (Sigma-Aldrich) in a fluorescent microplate reader (Biotek Synergy) 24. Briefly, early-mid exponential growth phase cells (OD600 = ~0.4) or stationary growth phase cells (overnight culture) were diluted in medium supplemented with 50 µg/ml BSA to an OD600 of 0.2 and then incubated with 1 µM DiSC3(5) with shaking. The fluorescent baseline, while the dye reached a stable accumulation in cells, was recorded every 42 sec with 651 nm excitation and 675 nm emission filters. After 3 min, valinomycin was added to the cells to a final concentration of 10 µM, and 1 % DMSO was added as a negative control. Of note, we used a ten-times lower valinomycin concentration in this assay compared with the concentration used in the experiments, because at high concentrations valinomycin affects the fluorescence of DiSC3(5). After addition of antibiotics, the fluorescent signal was recorded for another 30 min.
Generation of anaerobic conditions
Strains were cultured in KCl-LB overnight at 37°C with aeration. Following incubation, cells were mixed with either 100 µM valinomycin or 1 % DMSO. Cultures were then placed into an anaerobic chamber and oxygen was removed using AnaeroGen 2.5L sachets. Cultures were incubated for 10 h under anaerobic conditions. The CFU was determined prior to incubation and after the 10 h period through serial dilutions and plating on nutrient agar plates.
Microscopy
Fluorescence microscopy experiments were performed using a Nikon Eclipse Ti equipped with a CFI Plan Intensilight HG 130-W lamp, and images were acquired by a C11440-22CU Hamamatsu ORCA camera with NIS-Elements software version 4.20.01. Samples consisting of 0.3 µl of cells were spotted onto a microscope slide coated with a 1 % agarose film and sealed with a glass coverslip on top 68.
The level of cellular ROS production was measured by using a cell-permeant 2',7'-dichlorodihydrofluorescein diacetate (H2DCFDA) probe (ThermoFisher) 3 . The H2DCFDA probe was dissolved in DMSO to make 10 mM stock aliquots and stored at -20°C under dry argon. Overnight cultures grown in KCl-LB were 10-fold diluted in the same medium and grown at 37°C for 2 h (OD 600 around 2.5).
The probe was added to a final concentration of 10 µM, and the samples were protected from light and incubated for another 2 h (OD 600 around 4.0). Cells were then split and treated with either 100 µM valinomycin, 1 % DMSO as negative control, or 1 mM paraquat (Sigma) as positive control. For visualization of ROS fluorescence, cells from the different treatment time-points were quickly washed once, resuspended in PBS and imaged immediately. Images were analysed using Coli-Inspector, which is a plugin for ImageJ (National Institutes of Health) 69,70 .
For visualization of electron transport chain proteins, overnight cultures of GFP fusion expressing cells were grown to mid-exponential phase in KCl-LB with 0.2 % xylose at 30°C. Samples were treated with either 100 µM valinomycin or 1 % DMSO and analysed using fluorescence microscopy at different time points. Images were analysed using ImageJ 69 .
| 2021-09-25T16:02:30.146Z | 2021-08-24T00:00:00.000 | {
"year": 2024,
"sha1": "ee2ea8cd2f1f1e936107499fa323efc0d1f17ba8",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ef1736bb824e93104281611b4cae042e59a3f7e7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
258496619 | pes2o/s2orc | v3-fos-license | Leadership Style of Student Production House Organization (Case Study on Label N)
This research examines the leadership style of Label N, a production house founded by college students. The study uses a qualitative method with a case study approach, which seeks to describe data in words or sentences separated into categories in order to draw conclusions. Data were collected through in-depth interviews regarding the leadership style applied at Label N. Label N tries to establish a democratic leadership style. However, this democratic leadership style, which corresponds to the fourth system of Likert's four-system model of leadership, runs more smoothly if all members are open to each other's opinions, so that discussions do not turn into debates that cause conflict. Members are also advised to keep the discussion focused so that it does not stray from the topic at hand. In addition, because this leadership style follows the fourth Likert system, members must be able to respect ideas that come from other members; this keeps the ongoing discussions conducive. This leadership style is also referred to as "participatory", as it allows active participation and contribution from all members in decision making.
INTRODUCTION
Humans are social beings who cannot live alone and require the help of others in any case, especially in carrying out activities in their social environment.Every activity carried out by humans with other humans will involve communication (Rahmat, 2012), where there is an exchange of information or messages between the communicant and the communicator (Fiske, 2012).The message or information will be received directly or through the media (including the new media (Prayogi et al., 2020), when the message can be conveyed properly that is where there is effective communication, where the communication made has a reciprocal effect on the two communicators.There is no human being who will not be involved in communication if he lives in a social environment.The time spent in the communication process is 5% on writing, 10% on reading, 35% on speaking, and 50% on listening (Ardial, 2018).From this research, it can be concluded that communication is very important because whatever humans do every day, especially active in the social environment, will involve communication, either verbally or nonverbally.When communication goes well, indirectly, individuals with other individuals will build a relationship (Khomsahrial, 2014).Creating good relationships with others means improving our views, behavior, and attitudes toward one another in life together (Faules, 2015).Things that can help to make this happen include: deepening our awareness of socialization about the social environment in which we live and exist, starting from the initial social environment closest to us to the wider, Nation and State environment.Here the communication created will usually be more open, where one individual with another individual will know and know more quickly catch the messages conveyed by the communication opponent, when someone already knows more deeply about other people then a relationship or relationship will occur or when there is a group of people who have the same vision and mission, can be co-workers, close friends, romantic relationships, even organizations (Helmayuni, Totok Haryanto, Siti Marlida, Rino Febrianno Boer, Saktisyahputra, Aminol Rosid Abdullah, Ichsan Adil Prayogi, Angelika Rosma, Nadiah Abidin, 2022).In addition, it is also necessary to look at the concrete reality that exists in shared life, which is manifested in various forms of social interaction.
At this time the author will discuss the relationship created by communication and forming an organization.An organization can be stated as a social structure designed to coordinate the activities of two or more people through a division of labor, and a hierarchy of authority, to carry out the achievement of certain common goals (Ruliana, 2018).An organization is also has a definition as a unit formed by several people who have little or nothing in common about their backgrounds, identities, hopes, and various other things to achieve common goals together (Akbari & Pratomo, 2021).The elements of the organization consist of: (1) Man (people), in organizational or institutional life, often referred to as personnel, employees, or members.The personnel consists of members of the organization who according to their functions and levels consist of elements of leadership who are the highest leaders in the organizational structure and other divisions within the organization; (2) Cooperation, an act of cooperation that is carried out together to facilitate work and achieve common goals.The existence of cooperation between levels and divisions is referred to as human power (manpower) in the organization (Kartikawangi & Dahesihsari, 2020).
Organizational goals involve setting goals and guidelines for what the organization wants to do in the future.This includes figuring out what conditions the organization wants to always try to achieve, and then using social media to communicate these goals to everyone within the organization.(Ula, 2022).The goal is also a source of legitimacy that justifies every activity of the organization, as well as the existence of the organization itself (Ula, 2022).From some of the meanings that the author has described above, these sources come from experts and literature such as books and journals.So, Label N can also be said as one of the organizations because, in this production house, there are elements regarding the organization as the author has mentioned above.Label N is an organization or also called a production house (Ph) that has a contribution to making creative videos, which of course requires members who can realize organizational goals and carry out their duties and work, which is considered to be optimal and the performance is quite optimal, this is shown from the mingling between members and leaders, as well as cooperation between divisions, competence which is quite good than what the organization expects.Label N is a film production house that works in other entertainment fields, which was inaugurated on April 28, 2021.Initially, Label N was only a YouTube channel created for college assignments, but over time the founder or owner has the motivation to develop Label N more seriously.Finally, on April 28, 2021, Label N was officially established as a film and other entertainment production house until now.Because the founder of Label N took the initiative to commit to building this channel and wanted to create new works such as short films and content around entertainment shows with by utilizing existing channels, a hierarchical structure is formed where all members will have roles according to their performance and abilities.In addition to these reasons, Label N was also formed because of the selfmotivation of each member who is member of this production house to create a common goal that will be realized if there is a cooperation between one member and another.From this shared goal, Label N finally began to develop over time, from the beginning only as a task and then the publication of new ideas to create content or further works.This development is due to the many influencing factors of Label N members themselves, from the motivation of the members, the communication that is carried out, the organizational climate & how to manage a conflict, to the leadership style.
Management is responsible for creating an environment that supports productivity and creativity among employees.Creating an environment that is conducive to these goals is a challenge, but it is essential to the future of the organization.Organizational climate is important because it affects how people work together and how well an organization performs.Many experts in this field study the effects of organizational climate on organizational performance, and this research is published in scientific articles all over the world.Organizational climate has been studied in both business and non-business organizations.Communication experts are also interested in organizational climate, especially when it comes to industrial and organizational communication.The organization's climate is the way everyone in the organization perceives the environment around them.This affects how people feel and behave, and how well the organization performs.(Farida & Ganiem, 2017).The organizational climate is a kind of feeling or atmosphere that exists in the workplace.It affects how members of the organization behave and how they can help each other to be successful.This is important because it can help to improve the performance of the organization as a whole.. (Ruliana, 2018).
Leaders play an important role in setting the tone and culture of an organization by understanding what is important to the people working there.(Farida & Ganiem, 2017).The leader's behavior can affect how the entire organization feels, which can then affect how motivated the employees are.Employee motivation is the main factor in how well a company performs.It's based on reasons to encourager people to put their all into their work, and it comes from inside and outside of the company.(Farida & Ganiem, 2017).The performance of members is one of the main factors that affect the progress of the organization.The higher or better the performance, the more easily the organizational goals will be achieved, and vice versa if the employee's performance is low.Performance is something that cannot be separated from the organization.Performance is influenced by several factors including the work environment, competence, and organizational climate.The work environment has an important meaning in influencing performance.The analysis states that "the work environment is something that is around members and can affect them in carrying out their assigned tasks".Also, members must have the right qualifications in their work to realize the effectiveness and success of the work program in the long term.Improving the performance of individual members contributes to the performance of the human resource as a whole, which is expressed in increased productivity.In performance management, competence plays a role in the dimension of individual behavior in adapting well to work.The conditions and atmosphere of a good working environment can be created with good and proper organization (Ramadani, 2020).The analysis states that a good work atmosphere is produced mainly in wellorganized organizations, while a poor working atmosphere many is caused by an organization that is not well-organized as well (Ramadani, 2020).From some of these influences, this time the author is interested in reviewing the leadership style applied in Label N. Leadership style is measured by decision-making, leader behavior, and leadership orientation.This is done because every management needs to manage and know the performance of its employees, whether it is following the company's performance standards or not.By knowing the company's performance, it will be easier to find out how effective and successful employee development is.From the description above regarding leadership style, the author is interested in discussing the leadership style & organizational climate that is applied to Label N.
METHODS
This research used a qualitative method with a case study approach. Qualitative research with a case study approach seeks to describe data in words or sentences separated into categories in order to draw conclusions (Moleong, 2017). Sources of data were collected through in-depth interviews regarding the leadership style applied by Label N. Qualitative research interviews differ from other interviews: they have a specific purpose and are preceded by informal questions, because the researcher wants to learn what participants feel, think, and believe about something. Unstructured, non-standardized, informal interviews are initiated from common questions in a wide area of research (Moleong, 2017). Such an interview is usually guided by keywords, an agenda, or a list of topics to be covered, but there were no predetermined questions except in the very early interviews. In addition to the interviews, the researchers also collected documentation to examine the data objectively and systematically (Moleong, 2017). The purpose is to collect information that can be used to support the researcher's analysis and interpretation of the data.
RESULT AND DISCUSSION
The two news texts about online sexual harassment experienced by some Ojol drivers were chosen purposively, because the two news manuscripts contain data and describe the phenomena that occur around the OGBV.Previously it was known that Viva.co.id and Detik.com are two online media institutions with a national scale in Indonesia that come from different corporations.Viva.co.i d is an online media channel under the auspices of PT.Viva Media Baru, while Detik.comwhich in fact is the pioneer of online news portals in Indonesia that has existed since 1998 is one part of the convergence of media owned by PT.Viva Media Baru.Trans Corporation, made by Chairul Tanjung.The difference in the background of the parent media company between the two gives the assumption that there are differences in the perspective and system of journalistic work carried out by the two media on the news they produce.
As for the issue of online sexual harassment of Ojol drivers published i n the news portals of the two media, both of them made almost the same news headline because they were both preceded by the word "Viral!".But the two headlines also haven't conveyed full information about the sexual harassment in question is online sexual harassment and not in-person sexual harassment.Judging from the publication time of the two news, although it was published on the same day, the news from Detik.com was published first at 10.51 WIB.Meanwhile, a similar news story made by Viva.co.id was published 5 hours later at 16.37 WIB.Based on the results of the framing analysis that has been carried out, it illustrates as follows: In this writing, the author interviewed one of the speakers, namely the founder of Label N itself, and participants 1 and participant 2 who are members of Label N. The interview was conducted on January 28, 2022.The author asked several questions about Label N and the Leadership Style applied to Label N. In the first interview with the resource person, the founder of Label N real name Muhammad Edo Saputra said that Label N is a small production house that he created to make a place for creative people who want to learn together and develop together for a common goal.Because he is also very interested in the world of photography and videography, Label N is very suitable to accommodate him by working on hobbies.Then the author proposed the democratic leadership style that was applied, he replied that it was done so that all members feel comfortable and safe when they want to propose a new idea or innovation for their next works that will be published by them on Youtube channel they manage.The democratic leadership style they apply is also very influential in the development of Label N because the founder is very open to accepting new opinions and proposals so that there is no feeling of pressure on members which will result in Label N being stuck in place and members not feeling comfortable.However, there are indeed obstacles when there is a discussion because this democratic leadership style sometimes leads to debates in which one party wants to be heard more and vice versa.And sometimes there are times when decisionmaking is often hampered due to the complexity of the discussion which ends up being a debate.
In the second interview by participant 1 Aria Tri Cahya who serves as Editor as well as Director Of Photography (DOP), the question asked by the author is not too far from the previous question asked for the founder of Label N, entering Label N because he has a hobby that is closely related to the world.the film, namely video editing and Label N accommodates him to produce new works and hone his skills, especially with the concentration of lectures he takes are very related, so he is interested in joining Label N, besides that Label N indeed applies a democratic leadership style where all members can be active and participatory in providing new ideas and new content which can develop Label N itself.Participants felt that the Founder of Label N was very open and often communicated anything, both in the progress of production or in receiving opinions from members because, on the other hand, Label N was formed from people with the same college major background and their ages were not much different.so every discussion held is very open.But indeed because it is also often an obstacle, every discussion sometimes leads to a coachman's debate between one another so sometimes it is difficult to find the right solution to solve the problem.
In the third interview by participant 2, namely Thalia Maharani Albitha as Co-founder, she also helped build Label N. The interview conducted by the author is not far from the questions previously asked about Label N itself and also the democratic leadership style applied by Label N.He said that initially this production house was built only as a need for college assignments, then one of his friends, the founder of Label N himself, had the intention to build this channel because he felt he needed a place for creative people and then made it and his friends are also not far away.the same hobby finally the entry of members who do have expertise in this field and can think creatively.Then the application of this democratic leadership style is carried out so that there is a sense of justice and is not too rigid to carry out production later, and he also knows that the members at Label N are people who like to be creative and are not rigid in one rule that can hinder new ideas.they.In addition, the people who are in Label N are of the same age level and the founder wants to create Label N as not only a production house but also a home of friendship for them so that members can feel comfortable and feel safe when expressing their opinions or ideas.Then he also said that there are often obstacles when discussing it because indeed there are differences of opinion from all parties so it is difficult to find a solution.
From the interview above, which the author has described, there are characteristics in the ideal democratic leadership style applied in Label N. Quoted from the online article glints.comregarding democratic leadership styles and also its characteristics (Quamila, 2021), Cleverism is an organizational psychologist of German-American descent, Kurt Lewin, said there are three core elements of democratic leadership, namely (1) The leader expects subordinates to report on the progress of the task ;(2) Leaders expect subordinates to show maximum confidence and ability to get things done without constant supervision ; (3) Leaders expect subordinates to involve others in the decision-making process and not act alone.
In addition to the three elements above, some of the main characteristics of democratic leadership also include (1) Group members are encouraged to share ideas and opinions, although the leader remains the hammer on final decisions; (2) Group members feel more involved in the decision-making process so they are more likely to care about the outcome.
From the above characteristics, Label N has fulfilled the characteristics of a democratic leadership style which (1) The leader of Label N often communicates with all members regarding production progress, this can be seen from the interview with participant 1; (2) The founders run that they can accept members' opinions very openly because this is to build Label N itself; (3) The founders involve many people to come to a mutual agreement so that there is no sense of injustice.
A democratic leader is someone who is good at building consensus and making decisions together with the help of others.They will often discuss things with the group and come up with general steps that everyone can follow.If needed, the leader will suggest different ways to go about reaching a common goal.Members are free to work with whomever they choose and the division of labor is based on what the group decides is best.These characteristics are closely related to Label N where this production house is very suitable for implementing a democratic leadership style where everyone can be responsible and given authority.Members can also be allowed to provide suggestions for the advancement of Label N. Because Label N requires creative people, democracy is one way to develop this thinking.The leader also has the responsibility to control his members and to coordinate to stay out of the production progress that is being carried out.The advantages of this democratic leadership style : (1) Launching Very Well Mind, researchers found that democratic leadership is one of the most effective styles; (2) The reason is, that this method increases the work productivity of each member drastically, makes better contributions from group members, and also increases group morale; (3) The leadership style encourages creativity and respects the voice of each member; (4) They tend to be practically committed and inspired to contribute because they have a stronger sense of belonging in the group; (5) In addition, this leadership style involves assessing feedback between leaders and subordinates.Leaders can judge the performance of their members and vice versa.
The deficiency of this democratic leadership style; (1) The course of discussion to make decisions will turn ugly if each member of the group cannot communicate well; (2) In one group using many voices, the delivery of new views and opinions may overlap each other; (3) Instead of being productive, the leader must be more active in his role as a "referee" to mediate between each party so that all voices can be heard; (4) In addition, the decision-making process may also be hampered if each member, including the leader, does not have good problem-solving skills; (5) Instead of quickly reaching a solution, the coachman's debate only complicates and prolongs the discussion; (6) In some cases, group members may also not have the knowledge or skills expected to make a quality contribution to the decision-making process.This democratic leadership style is identical to the "Four Likert system leadership style" in the fourth system, namely the Inviter-Participant.This style is one of the most sporty because every member of the organization can communicate freely, openly, and frankly (Faules, 2015).This style has the goal of the organization running well through the participation of each of its members.On the organizational climate of several theories that have been put forward and the results of interviews conducted, Label N has a concept of organizational climate that is closer to the three needs theory from McClelland (Faules, 2015) as the main type, motivation, it was found that the three needs are influenced by organizational climate.There are also nine dimensions of organizational climate, namely structure, responsibility, reward, risk, friendliness, warmth, support, standards, conflict, and identification (Ruliana, 2018) : (1) Structure.The structure reflects the feeling that Label N members are well organized and have clear definitions of roles and responsibilities in each of their divisions; (2) Standards.Measuring the feeling of pressure to improve performance and the degree of pride that members in each division have in doing their job well.Includes working conditions experienced by members in presenting an idea; (3) Responsibility.Reflect members' feelings that they are their leaders and never seek advice on their decisions from others.Includes independence in completing work, in this case only sometimes when obstacles in finding ideas together; (4) Confession.The feeling of being properly rewarded after a job well done.Including rewards or wages, in this case, members will be given an appreciation for the performance that has been given in providing a work program that has goals, in the form of praise, bonuses in the form of money if they win the event, and others; (5) Support.Reflect employees' feelings of trust and mutual support that prevail in the workgroup.Including relationships with other co-workers, and a family that is the basis for the formation of an organizational climate at Label N which makes running work programs very balanced; (6) Commitment.Reflects feelings of pride and commitment as a member of the organization.Includes employee understanding of the goals to be achieved by the company, in this case, Label N has a vision, mission, and goals that are strung together into a unit to advance the Label N organization.
A conducive work environment has a direct effect on the organizational climate system at Label N in improving the performance of members.A Label N work environment is good because the members and the core daily board members go on a vacation together to become a very balanced thing so that they can carry out their activities in an optimal, healthy, safe, and comfortable way.Based on this, it can be said that if the work environment is improved, the performance of members in Label N can also increase.A conducive organizational climate provides a sense of security and allows members to work optimally.A conducive organizational climate is needed to support the implementation of tasks in each division.Organizational climate is an important factor in efforts to improve performance in each Label N division.The higher the organizational climate, the better the results of Label N members.Advantages for the organization Label N. Organizational climate is closely related to the people (departments) who perform organizational tasks to achieve organizational goals.Organizational climate is also closely related to individual perceptions of the organization's social environment, which affects the behavior of the organization and its members.
CONCLUSION
Label N tries to establish a democratic leadership style. However, this democratic leadership style, which corresponds to the fourth system of Likert's four-system model of leadership, runs more smoothly if all members are open to each other's opinions, so that discussions do not turn into debates that cause conflict. Members are also advised to keep the discussion focused so that it does not stray from the topic at hand. In addition, because this leadership style follows the fourth Likert system, members must be able to respect ideas that come from other members; this keeps the ongoing discussions conducive. The organizational climate can be a bridge that connects management and member behavior in realizing organizational goals, as well as a tool for members of the Label N organization to understand the prevailing order in the work environment and to adjust to the organization. A conducive organizational climate can be used to improve performance in the Label N organization. Organizational climate can affect the behavior of each individual, so that member behavior such as satisfaction, motivation, and commitment can be increased. Organizational climate can thus be seen as a key variable of organizational success.
"year": 2023,
"sha1": "c5fe0bf13ab1e97befee63d29651ad4483f956c6",
"oa_license": "CCBYSA",
"oa_url": "https://journal.trunojoyo.ac.id/komunikasi/article/download/18355/7970",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1da41c719138b906a88af7644e803054f6f826c1",
"s2fieldsofstudy": [
"Education",
"Business"
],
"extfieldsofstudy": []
} |
57530008 | pes2o/s2orc | v3-fos-license | Implementation of the design concept of a high-speed processing cycle for CNC machines in the form of a software module CAM-system
In this paper, the authors considered the factors of the technological system that affect the performance of high-speed operation, and formulated recommendations for designing high-speed operations on CNC machines, as well as the directions for automating calculation data based on CAM systems.
Introduction
In modern engineering, high-speed processing with abrasive wheels is used to produce products with high-precision dimensions and low roughness. Modern high-tech CNC machines for high-speed machining allow the processing to be organized as grinding sequences consisting of several stages. Such a sequence is represented for the CNC machine as a series of commands in G-code according to the international ISO standard, recorded in the control program.
When developing the grinding sequences and the control program, it is necessary to take into account the whole set of requirements for the roughness and accuracy of the machined surface, the absence of burns, the intensity of tool wear, and the power of the machine. It is also necessary to ensure maximum productivity. Manual development of a control program that takes all the above factors into account is very difficult, which forms the problem to be solved. It is proposed to develop a software module, based on advanced scientific research, that designs a machining cycle from the data entered by the user and outputs it as a control program file.
To develop the software module, it is necessary to build an algorithm for its functioning. The algorithm must embody a scientific methodology for designing a high-speed processing cycle for CNC machines. Considering the research on the design of high-speed machining cycles, we note that there are both foreign [1][2][3][4][5][6] and domestic studies [7][8][9][10][11][12][13][14][15] in this area. A proprietary method of designing a high-speed machining cycle for a CNC machine is proposed, allowing process rates to be calculated with automatic generation of cycle steps.
Design technique for high-speed processing cycle
The grinding sequence is constructed in the "radial feed / stock" coordinate system. After the grid is formed, a fixed order of constraint checks is applied when moving along the "radial feed / stock" grid. First, the feed that meets the surface roughness requirement of the part at the given workpiece rotation frequency is calculated. This feed is then compared with the nameplate data of the machine. If the constraint on the surface roughness of the part is satisfied, the limitation on the required drive power for the given value of the radial feed is checked next. When these conditions are met, the limiting force at which the grinding wheel wears intensively and the main component of the cutting force are calculated. Then, the temperature in the treatment zone and the depth of burn on the workpiece surface are calculated for the given radial feed. The obtained burn depth is verified by comparison with the residual value of the stock. Next, the specified radial feed is checked against the magnitude of the elastic deformations in the technological system. When all of the above conditions are met, the grid is advanced along the stock coordinate at the specified feed and the constraints described above are recalculated.
Consider the order of formation of the high-speed processing cycle on a specific example ( Figures 1-6). Figure 1 shows the grid step with the radial feed rate S1 and the stock h1. At point 1, the calculation of the limitations of the radial feed S1 was made on the nameplate data of the machine, the surface roughness, the required drive power, friability of the grinding wheel. This feed has passed this block of limitations, therefore, the burn depth hburn is calculated. It can be seen from the graph that the depth of the burn is limited by the residual stock. Therefore, the actual change of the radial feed is calculated taking into account the actual stiffness of the technological system (curve 1) from the preselected feed S1 to the feed Smin, in which the requirements on the surface roughness of the part are fulfilled. As a result of the calculation, the actual coordinate of the removed stock 2* shows the difference from the specified value in the coordinate of the "stock" (point 2), but at the same time the elastic deformations limit in the technological system is satisfied.
To test the possibility of improving the resulting two-stage grinding sequence, a repeated step is taken along the axis of the "stock" with the feed S1 (see figure 2). At point 2, the calculation of the radial feed limit S1 to the depth of burn is also calculated. When limiting the burn depth, the actual change in the radial feed is calculated taking into account the actual rigidity of the technological system (curve 1) from the preselected feed S1 to the feed Smin, in which the surface roughness of the part is fulfilled. As a result of the calculation, the actual coordinate of the removed stock 3 * shows the difference from the specified value in the coordinate of the "stock" (point 3), but the requirement to limit the feed depends on elastic deformations in the technological system is accomplished (curve 1).
Similarly, to assess the possibility of improving the grinding sequence, a displacement along the axis of the "stock" is performed with S1 fed to point 3 (see figure 3). However, the preselected feed passes only the first four limits. When calculating the burn limit, the figure shows that the residual stock is less than the burn depth and when calculating the decrease of feed S1 to feed Smin, the radial feed is not limited by elastic deformations in the technological system, that is, the actual feed decreases at 4 * limits of residual stock at point P0 (curve 1). Thus, this option is not accepted and the return along the allowance to point 2 is made and the feed rate S1 is reduced to feed S2 (see figure 4). With a new feedrate value S2, the displacement along the coordinate of the stock from point 3 to point 4 is performed, followed by the calculation of the limitation on the depth of burn and elastic deformations in the technological system. In this case, the feed S2 passes the power limit and the burn depth. After, decrease of feed S2 to feed Smin (curve 1) is simulated. The feed reduction is made at the 5 * point, which shows the implementation of the limit on elastic deformations in the technological system.
To test the possibility of improving the three-stage grinding sequence, the coordinate is moved along the "stock" at feed point S2 to point 5 (see figure 5). It can be seen from the figure that the feed S2 does not pass the limitation on the burn depth of at point 6 and the magnitude of elastic deformations in the technological system while feed reduction at point 6 *, since there is a departure from the coordinate at point P0 (curve 1). Therefore, this move on the grid is discarded and returns to point 4, followed by a decrease the feed S2 to the feed S3 at point 5 (see figure 6.). This feed goes through force and restrictions on the burn depth when moving to point 6. Next, decreasing of the feed rate S3 to the feed Smin at point 7 is simulated. From the figure, the feed S3 passes the limits of elastic deformations in the technological system and a four-step grinding sequence with removal residual stock at point P0 is formed. Thus, according to the developed design methodology, the first version of the permissible grinding sequence is formed, which is written into an array of cycles at a given frequency. Then the initial feed in the grid S1 is reduced by a given step hs and the limits are recalculated to assess the admissibility of this move.
After calculating the limits, in the same way as for the first variant of the cycle, the remaining stages of the grinding sequence are calculated. As a result of the repeated reductions of the initial feed of the cycle and the repeated calculations, a second permissible grinding sequence is formed, which is written into the cycle array at a given frequency of rotation of the workpiece. The initial feed rate is thus reduced in each subsequent version of the cycle until the initial radial feed rate S1 equals Smin, that is, until a single-stage grinding sequence is formed. As a result, the cycle array contains a set of n permissible grinding sequences with different numbers of cycle steps. From this array of cycles, the option with the shortest processing time is selected. If the requirements for the limits are met, this cycle is recorded as optimal at the given frequency of rotation of the workpiece. If the requirements are not met, this cycle is removed from the array of cycles and the next cycle in terms of productivity is checked. This sequence is repeated until the tested cycle satisfies the requirements for the specified accuracy of the part. The most productive grinding cycle that fulfills the technological limitations is recorded as optimal at the given frequency of rotation of the workpiece.
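The selection logic described in this paragraph — build candidate cycles for successively reduced initial feeds, keep the admissible ones, and take the most productive cycle that passes the final accuracy check — can be summarised in a few lines of Python. The sketch below only illustrates that control flow; the constraint models, the time model and all names are assumptions of this sketch, not the published module.

def select_optimal_cycle(s1, s_min, h_step, build_cycle, meets_accuracy, cycle_time):
    """Pick the most productive grinding cycle from the admissible candidates.

    s1, s_min     : initial and minimum radial feed for the candidate cycles
    h_step        : step by which the initial feed is reduced between candidates
    build_cycle   : callable(initial_feed) -> cycle stages, or None if a limit is violated
    meets_accuracy: callable(cycle) -> True if the final accuracy requirement is met
    cycle_time    : callable(cycle) -> total machining time of the cycle
    All callables stand in for the constraint and time models described above.
    """
    candidates = []
    feed = s1
    while feed >= s_min:
        cycle = build_cycle(feed)      # returns None if any limit cannot be satisfied
        if cycle is not None:
            candidates.append(cycle)
        feed -= h_step                 # next candidate starts from a lower initial feed
    for cycle in sorted(candidates, key=cycle_time):   # most productive cycle first
        if meets_accuracy(cycle):
            return cycle               # recorded as the optimal cycle
    return None                        # no admissible cycle at this workpiece speed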
Development of software module for CAM-system
Based on the described methodology, a software module for calculating the high-speed processing cycle was developed. In this software module, the user enters the source data for the following groups of parameters: equipment parameters, tool parameters, workpiece parameters, and coefficients that account for the temperature in the processing zone and the wear of the grinding wheel. The interface of the program module is presented in figure 7. After entering all the necessary data in the form presented in figure 7, the user presses the "Start Cycle Calculation" button, which launches the internal algorithm of the program. After the program module completes the design of the cycle, a file with the control program is generated in the output folder (see figure 8). This file can be transferred to the machine by any available means and run without prior adjustment.
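The internal structure of the generated control program is not shown in the paper, so the sketch below only illustrates the general idea of writing computed cycle stages to an ISO-style G-code text file. The block layout, addresses and numeric values are hypothetical; a real post-processor must follow the dialect of the target CNC control.

def write_grinding_program(path, stages, workpiece_rpm):
    """Write a minimal ISO-style control program from computed cycle stages.

    stages: list of (radial_feed_mm_min, stock_to_remove_mm) tuples.
    The block layout is illustrative only, not the format produced by the module.
    """
    lines = ["%", "O1000 (GRINDING CYCLE)", f"S{workpiece_rpm} M03"]
    x = 0.0
    for n, (feed, stock) in enumerate(stages, start=1):
        x -= stock  # incremental radial infeed for this stage
        lines.append(f"N{n * 10} G01 X{x:.4f} F{feed:.3f}")
    lines += ["M05", "M30", "%"]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Hypothetical four-stage cycle (feeds in mm/min, stock in mm), not calculated data:
write_grinding_program("cycle.nc", [(1.2, 0.12), (0.8, 0.08), (0.5, 0.05), (0.2, 0.02)], 160)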
Conclusion
Thus, a software module was developed based on the high-speed processing cycle design method for CNC machines. The module allows an effective and highly productive high-speed processing cycle to be designed for CNC machines from the data entered by the designer. The developed module is intended for software engineers at machine-building enterprises, and can also be integrated into a CAM system for further use.
"year": 2018,
"sha1": "3fa482dbb405b83d555fbf77b7da07cef97d21ea",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/450/3/032028",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "4ae7b8b0c9363d2779c7d94c81929f648af68b9f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
271652394 | pes2o/s2orc | v3-fos-license | Comprehensive Analysis of Environmental Monitoring Data from the Department of Nuclear Medicine and Molecular Imaging (NMMI) of the University Medical Center Groningen (UMCG)
: Environmental monitoring (EM) is the cornerstone for the assurance of sterility during aseptic manufacturing. In this study, the EM quality aspects in the radiopharmaceutical cleanrooms of the University Medical Center Groningen (UMCG), The Netherlands, were evaluated. Hereto, data obtained from EM over the period 2010–2022 were analyzed. The data were sorted according to the Good Manufacturing Practice (GMP) classification of the respective premises with their corresponding limits, and frequencies of excursions were determined per location. The frequency of conducted measurements gradually increased between the start and end of the assessed period. There was a trend of increased action limit excursions observed between 2010–2022. We found that EM in grade A areas appeared to be significantly less compliant with GMP specifications than the combined data from all sampled premises at the facility ( p < 0.00001; two-sided Fisher’s exact test). A trend was found for reduced action limit excursions for passive air sampling and particle counting, suggesting improved GMP compliance over time for this specific type of EM. The contamination recovery rate (CRR) found for cleanroom conditions, around 10%, was considered sufficient. From this comprehensive data analysis, we learn that, in order to be fully compliant with the requirements set in the recent revision of EU (European Union)-GMP Annex 1 ‘Manufacture of sterile medicinal products’ (in force as of 25 August 2023), strategies to further improve product protection are justified. For example, improved cleaning and disinfection procedures, more efficient working methods as well as optimization of the conditions under which aseptic manufacturing is performed are to be considered.
Introduction
To guarantee the quality of pharmaceutical products as well as patient safety, their manufacturing must comply with strict regulations as set by governments and professionals.Application of the Good Manufacturing Practices (GMP) guidelines is the global standard in this respect [1][2][3].
At the Department of Nuclear Medicine and Molecular Imaging (NMMI) of the University Medical Center Groningen (UMCG), The Netherlands, annually circa 1500 batches of radiopharmaceuticals for diagnostic (positron emission tomography (PET), single-photon emission computed tomography (SPECT)), and therapeutic purposes are produced in cleanroom facilities under GMP conditions.These products are not only used in the UMCG itself, but also distributed to other hospitals in the Netherlands.For the manufacturing of radiopharmaceuticals, especially GMP Annex 1 ('Manufacturing of sterile medicinal products') [4] and its recently revised version [5] as well as Annex 3 ('Manufacturing of radiopharmaceuticals') [6] are important.
To assess and assure quality and compliance with GMP, environmental monitoring (EM) is a keystone in the protection against microbiological contamination in the manufacturing premises [7].Radiopharmaceuticals are mostly administered parenterally and due to their short shelf life, microbiological contamination can only be detected after the product has been administered to the patient [8].Radiopharmaceuticals are therefore, as described in Annex 17 'Real time release testing and parametric release', released conditionally (without microbiological testing) by the qualified person (QP), based on full control of the production process and documented results of EM [9].Another challenging issue is the radiation safety requirements such as under-pressure (regularly minus 10 Pa) instead of over-pressure which may affect the microbiological situation in the cleanroom.
According to GMP, several methods of EM must be used.Although the frequencies may differ, regular monitoring schedules consist of active and passive monitoring, as well as RODAC sampling (contact plates), finger prints, and particle counting.Using the data of EM at the Department of NMMI collected over the period 2010-2022, it was our objective to assess the compliance with GMP and to critically evaluate the performance and quality of the manufacturing of radiopharmaceuticals.Over the years, trends and deviations of the EM of the cleanrooms were critically analyzed and discussed.This approach is useful as a self-reflection on the work at the Department of NMMI, to identify weak and strong points, and to support continuous improvement by means of the plan-do-check-act (PDCA) cycle.It may serve as an example for other GMP production units for radiopharmaceuticals with recommendations to further improve the quality of the products and to reduce the risk of microbial contamination.
As of 25 August 2023, all GMP-units (in hospitals, industry, etc.) must comply with the new and revised EU (European Union)-GMP Annex 1 [5].The guidelines for microbiological monitoring have been tightened as compared to the former Annex 1 (in force until 25 August 2023) [4].Since the production of radiopharmaceuticals at the Department of NMMI has now to be conformant with the revised Annex 1, the evaluation of the EM data of 2022 is performed based on the revised EU-GMP Annex 1 while the entire data set is mirrored to the former EU-GMP Annex 1.
Premises and General Description of Environmental Monitoring (EM) Procedures
Within the Department of Nuclear Medicine and Molecular Imaging (NMMI) of the University Medical Center Groningen (UMCG), The Netherlands, a Single-Photon Emission Computed Tomography (SPECT) section, a Positron Emission Tomography (PET) section (up until 2018), and a Quality Control (QC) lab are present, including premises that comply to GMP classes A, B, C, or D [1].Throughout the years, EM was executed according to the (in house) Standard Operating Procedure (SOP) 'Monitoring cleanrooms NMMI' which is based on the former EU-GMP Annex 1 [4].The SOP contains information regarding locations for sampling and sampling frequency.In total, 63 sampling locations are included.
During each preparation of a radiopharmaceutical, the indicated EM was executed according to protocol during operation. During each preparation session in a Grade A zone, this consisted of one passive settle plate placed on an indicated critical spot in the particular laminar air flow (LAF) hood or hot cell. All other procedures (active and contact plates) were performed separately from the preparation process. Appropriate agar plates obtained from active and passive air sampling and those from RODAC sampling (contact plates) were sent to the Department of Medical Microbiology (MMB) of the UMCG and incubated at 35 °C for 7 days, after which the total number of colony-forming units (cfus) and the species of the microorganisms were determined. Typically, Staphylococcus species (but no S. aureus), Micrococcus luteus, and Corynebacterium species have been identified from our cleanrooms. Data were sent back to the Department of NMMI, recorded, and critically assessed. Action was undertaken in case of an excursion. During the period 2010-2022 the former EU-GMP Annex 1 was used as a guideline [4], requiring that in grade A areas the average number of cfus found for EM tests should be below 1. An overview of all maximum action limits of particles and viable particle contamination according to EU-GMP Annex 1 [5] is presented in Table 1.
Table 1. Overview of all maximum action limits of particles and viable particles according to the recently revised EU-GMP Annex 1 [5].
Passive Air Sampling
Passive air sampling was performed per production by placing settle plates (tryptone soya agar (TSA), diameter 90 mm, No. K123 P090CR, Biotrading, Mijdrecht, The Netherlands) at predetermined locations in hot cells and in the LAF cabinets. The results were expressed as cfus/4 h exposure time.
Particle Counting
Particle counting has been performed monthly from 2010-2020 and continuously from 2020 onwards at predetermined locations, compliant with the recommendation of daily monitoring for critical locations [10]. Particles of both 0.5 µm and 5 µm (viable and non-viable) were counted by using particle counters (Lasair II 310B, Particle Measuring Systems, Boulder CO, USA; Kalibra particle counters, Delft, The Netherlands). The results were expressed as particles/m³. The Lasair II particle counters are operated for 4 min in cleanrooms class C and D and for 36 min in hot cell/LAF hoods in cleanrooms class A and B. The time difference relates to the different number of particles per air volume in the various classes of cleanrooms. The Kalibra particle counters count continuously (24 h/7 days).
Data Analysis
The obtained data were sorted according to the GMP classification of the respective premises with the corresponding limits [1], and frequencies were determined per location. The data were then grouped into one of the three categories: below the alert level (compliant), between the alert level and the action limit (alert), or above the action limit (excursion) [4,5]. The total number of measurements within each category was counted for each sampling location and each test, and frequency tests were performed to detect possible trends or outstanding results for different locations, GMP cleanroom classes, or seasons and years.
A statistical evaluation was performed using a two-sided Fisher's exact test. A p-value < 0.05 was considered significant. With this test, the independence of two categorical variables, each with two levels, can be assessed based on the hypergeometric distribution, which is applicable for these data [11]. To create the two subgroups for the Fisher's exact test, the data that fell in the compliant category were compared to the data of the combined alert and excursion categories.
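The comparison described above is a 2 × 2 contingency table (two groups of locations by compliant versus combined alert-plus-excursion counts), which maps directly onto the Fisher's exact test available in SciPy. The counts in the sketch below are illustrative placeholders, not the study data.

from scipy.stats import fisher_exact

# Rows: grade A areas vs. all other areas; columns: compliant vs. alert+excursion.
# The counts are hypothetical examples only.
table = [[950, 50],     # grade A: compliant, alert+excursion
         [1900, 40]]    # other grades: compliant, alert+excursion
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.5f}")
# p < 0.05 would indicate that compliance differs significantly between the two groups.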
The contamination recovery rate (CRR) over time was calculated and analyzed for grade A areas for the results of the combined active air sampling, passive air sampling, and contact prints. The CRR is defined as the percentage of samples that show any microbial recovery, irrespective of the number of colony-forming units (cfus) [12], and is calculated as follows:

CRR = (Number of samples with counts > 0 cfu / Total number of samples collected) × 100%

Frequency measures were performed on all other data sets, the active air sampling, particle count data, and contact print data, allowing a comparison of the seasons, the years, and of different cleanroom classes.
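With that definition, the CRR only requires the number of samples showing at least one colony. A minimal Python sketch, using hypothetical plate counts rather than study data, is:

def contamination_recovery_rate(cfu_counts):
    """Percentage of environmental samples with any microbial recovery (>= 1 cfu)."""
    positives = sum(1 for cfu in cfu_counts if cfu > 0)
    return 100.0 * positives / len(cfu_counts)

# Hypothetical grade A results for one period (cfu per sample), not study data:
samples = [0, 0, 0, 1, 0, 0, 0, 0, 2, 0]
print(f"CRR = {contamination_recovery_rate(samples):.1f}%")  # -> CRR = 20.0%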
Combined Data for Premises
The combined EM data for all areas of the Department of NMMI facilities over the period 2010-2022 are shown in Figure 1.
For passive air sampling (Figure 1A), a trend of increasing testing frequency is seen from the introduction of measuring up to 2018. Then, a small dip follows in the period 2019-2021, with the measurement total of 2022 being similar to the level of 2018. A significant decrease (p < 0.05, two-sided Fisher's exact test) in alert level outcomes is seen throughout the years (p < 0.00001). Between the years, excursions of the action limit varied in number (4-86) and as a percentage of the total measurements (2.8-6.4%, 4.5% on average). No significant trend of decrease or increase was found.

For active sampling (Figure 1B), an increase in the number of measurements is seen from 2010 up until 2013, with a dip in 2012. From 2013, however, a gradual decline in the monitoring frequency is visible. There is no up- or downward trend for alert excursions, but a significant increase in action limit excursions is seen in the most recent years, starting in 2018, with only little variation between the compliant measurements of these years (p < 0.00001).

The frequency of RODAC sampling (Figure 1C) increased gradually since the start of the data collection, with the exception of the year 2011 in which an increased number of measurements was carried out compared to the previous year. Both alert and action limit excursions strongly varied between the years, resulting in no significant trend in these data categories.
Particle counting (Figure 1D,E) started in 2011. Initially, the number of measurements was high but decreased as of 2014. Alert and action limits were only significantly overrun in 2015 and 2016. Action limit excursions for particle counting stayed at a low-end level and remained more or less constant from 2017 onward. There was no difference between 0.5 and 5 µm particles. The particle counting data as presented from 2021 onward are incomplete as from that date continuous particle count monitoring was performed. These data turned out to be far more complicated to compile in a report. The most important reason for this is the vast amount of available data points. (In the figures, green denotes results below the alert level (compliant), yellow results between the alert level and the action limit (alert), and red results above the action limit (excursion).)
Data for Grade A Areas
Figure 2 depicts the data obtained from measurement locations that are graded class A according to the GMP. Comparison of the results of all grade A areas with other rooms (grade C and higher) yielded a p-value of 0.00001, indicating a significant difference between grade A and non-grade A areas. Figure 2A shows a decrease in total measurements throughout the years in alignment with the data shown in Figure 1A, only having slightly higher percentages of alert excursions (range 2.76-6.35%, mean 4.54% for grade A cleanrooms versus range 4.26-11.19%, mean 7.65% for all locations).

The results for active air sampling in grade A areas (Figure 2B) show a similar trend as for active air sampling in all areas combined at the Department of NMMI (Figure 1). A decline in measurements is seen after 2011, but from 2014, a gradual increase is seen up to 2022, with a small dip from 2019-2020. Again, a trend is seen of increasing action limit excursions.
RODAC sampling in grade A areas (Figure 2C) shows an increase in measurements over time, as well as an increase in excursions of the action limit. Compared to RODAC sampling of all areas combined (Figure 1C), a more gradual development is seen in grade A areas (Figure 2C), while both show a decline in 2014.

Particle counting measurements for grade A rooms (Figure 2D,E) closely resemble those for the complete department (Figure 1D,E), having a fast and then gradual decline in measurements carried out. There was no difference between 0.5 and 5 µm particles.

For each individual grade A area, all measurements available for the four types of EM from 2010-2022 were sorted and combined. These results are shown in Figure 3.

Each area has a different purpose and it can be seen that some locations are used more extensively than others. For example, for grade A areas, the measurements are more or less proportional to the number of preparations due to passive air measurements being carried out during each manufacturing process. The other sampling types are routine and should not vary much between the locations. No clear trend is visible from Figure 4, but based on a Fisher's exact test, the SPECT glovebox (GB) was found to significantly differ from all the other locations in terms of excessive excursions of the action limits. SPECT 1, SPECT 2, and SPECT 3 had the lowest number of excursions, being statistically significant compared to the other areas, except for PET 3.
Data for Grade A Areas
Figure 2 depicts the data obtained from measurement locations that are graded class A according to the GMP. Comparison of the results of all grade A areas with the other rooms (grade C and higher) yielded a p-value of 0.00001, indicating a significant difference between grade A and non-grade A areas. Figure 2A shows a decrease in total measurements throughout the years, in alignment with the data shown in Figure 1A, only having slightly higher percentages of alert excursions (range 2.76-6.35%, mean 4.54% for grade A cleanrooms versus range 4.26-11.19%, mean 7.65% for all locations).
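As a rough illustration of how such a comparison can be set up (the counts below are invented placeholders, not the actual UMCG monitoring data, and only scipy's standard fisher_exact is used), a 2 × 2 contingency table of action-limit excursions versus compliant samples can be tested as follows:

from scipy.stats import fisher_exact

# rows: [grade A, non-grade A]; columns: [action-limit excursions, compliant samples]
table = [[120, 2500],   # hypothetical grade A counts
         [60, 4800]]    # hypothetical non-grade A counts

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")

A two-sided test is the natural choice here because, before looking at the data, an excess of excursions in either group is conceivable.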
The results for active air sampling in grade A areas (Figure 2B) show a similar trend as for active air sampling in all areas combined at the Department of NMMI (Figure 1). A decline in measurements is seen after 2011, but from 2014 a gradual increase is seen up to 2022, with a small dip from 2019-2020. Again, a trend of increasing action limit excursions is seen.
RODAC sampling in grade A areas (Figure 2C) shows an increase in measurements over time, as well as an increase in excursions of the action limit. Compared to RODAC sampling of all areas combined (Figure 1C), a more gradual development is seen in grade A areas (Figure 2C), while both show a decline in 2014.
Particle counting measurements for grade A rooms (Figure 2D,E) closely resemble those for the complete department (Figure 1D,E), having a fast and then gradual decline in the number of measurements carried out. There was no difference between 0.5 and 5 µm particles.
For each individual grade A area, all measurements available for the four types of EM from 2010-2022 were sorted and combined. These results are shown in Figure 3.
Each area has a different purpose, and it can be seen that some locations are used more extensively than others. For grade A areas, the number of measurements is more or less proportional to the number of preparations, because passive air measurements are carried out during each manufacturing process. The other sampling types are routine and should not vary much between the locations. No clear trend is visible from Figure 4, but based on a Fisher's exact test, the SPECT glovebox (GB) was found to differ significantly from all the other locations in terms of excessive excursions of the action limits. SPECT 1, SPECT 2, and SPECT 3 had the lowest numbers of excursions, which were statistically significant compared to the other areas, except for PET 3.
Figure 4 shows the timeline of the mean percentage of excursions of the data of all EM techniques combined, for the entire Department of NMMI and for the grade A areas separately. The percentage of excursions shows a similar trend for both, with the exception of 2020-2022, in which the grade A rooms had a higher relative occurrence of excursions.
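The yearly pooling behind such a curve can be sketched as follows; the record format and the example tuples are hypothetical and only meant to show the bookkeeping, not the department's actual data handling:

from collections import defaultdict

def excursion_percentage_per_year(records):
    """records: iterable of (year, outcome) with outcome in {"compliant", "alert", "excursion"}."""
    totals = defaultdict(int)
    excursions = defaultdict(int)
    for year, outcome in records:
        totals[year] += 1
        if outcome == "excursion":
            excursions[year] += 1
    return {y: 100.0 * excursions[y] / totals[y] for y in sorted(totals)}

example = [(2010, "compliant"), (2010, "excursion"), (2011, "alert"), (2011, "compliant")]
print(excursion_percentage_per_year(example))  # {2010: 50.0, 2011: 0.0}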
Seasonal Variation
Data of the period 2010-2022 of the passive air sampling measurements were sorted according to the time of the year in which they were measured. The years were divided into four trimesters (Q1, Q2, Q3, and Q4) to assess any possible influence of the seasons on the outcomes of the passive air sampling. From Table 2, it appears that in proportion the data are similar (alert limit excursions 1.05-1.33%, action limit excursions 3.99-4.86%). The frequency of measuring differed, with Q2 having approximately 1000 fewer measurements than Q4. Only Q3 significantly differed from Q2 (p = 0.0058) and Q4 (p = 0.007). There were no further significant differences between trimesters.
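A minimal sketch of how such a per-trimester comparison could be computed is given below; the sample format (month, excursion flag), the helper names, and the dummy data are assumptions made purely for illustration:

from scipy.stats import fisher_exact

def quarter(month):
    return (month - 1) // 3 + 1  # months 1..12 -> Q1..Q4

def quarterly_table(samples, q_a, q_b):
    """Build a 2x2 table [excursions, non-excursions] for two trimesters."""
    counts = {q: [0, 0] for q in (q_a, q_b)}
    for month, excursion in samples:
        q = quarter(month)
        if q in counts:
            counts[q][0 if excursion else 1] += 1
    return [counts[q_a], counts[q_b]]

samples = [(2, False), (5, True), (8, False), (11, False)] * 50  # dummy data
print(fisher_exact(quarterly_table(samples, 3, 2)))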
Contamination Recovery Rate
The contamination recovery rate (CRR) is the incidence rate of any level of microbiological contamination in the environmental samples taken and was used in addition to the frequency measurements. Because the revised EU-GMP Annex 1 does not allow any microbiological growth for locations with classification A (in contrast to the former version of the EU-GMP Annex 1), the CRR can be used as a measure of the current state and of whether improvements must be made to meet these requirements. From the Fisher's exact test, it became clear that the observed differences in CRR for the different types of microbiological monitoring were significant (Table 3). When comparing the CRR of grade A areas over the years 2010-2022, it becomes visible that the first two years (2010 and 2011) and the last two years (2021 and 2022) showed the highest percentages. Between those years, the CRR fluctuated between 4.6% and 8.6%, with 2014 being significantly lower than all years except 2013 and 2017 (Figure 5).
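The CRR itself is a simple proportion; the following sketch, with invented cfu counts, shows the calculation that this definition implies (it is not the authors' own script):

def contamination_recovery_rate(cfu_counts):
    """Percentage of samples showing any growth at all (>= 1 cfu), regardless of the count."""
    if not cfu_counts:
        return 0.0
    contaminated = sum(1 for cfu in cfu_counts if cfu > 0)
    return 100.0 * contaminated / len(cfu_counts)

year_samples = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 2, 0]  # dummy monthly grade A results
print(f"CRR = {contamination_recovery_rate(year_samples):.1f}%")  # 16.7%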
Discussion
The production of radiopharmaceuticals at the Department of NMMI of the UMCG is performed under GMP conditions, as legally required. The EU-GMP Annex 1 [4], and as of 25 August 2023 its revised version [5], is an important guideline that must be adhered to for aseptic handling and for minimizing microbial contamination. To assure the quality of the sterile environment, EM is routinely performed and the results are recorded and archived. According to GMP guidelines, corrective and preventive actions (CAPAs) are taken whenever necessary. This article analyzed the EM data collected over the period 2010-2022 to assess any trends and possible deviations, and to find out how compliance with GMP has influenced the quality of the conditions in the cleanrooms over the years.
The frequency of conducted EM gradually increased between the start and the end of the assessed period. A trend of increased action limit excursions was observed between 2010-2022 for active air sampling and RODAC sampling.
Figure 1A shows a clear drop in the number of outcomes during 2020. This may be explained by the COVID-19 pandemic, which caused a decline in the number of scans. Radiopharmaceuticals were therefore needed to a lesser extent, which is reflected in the decreased monitoring data [13].
Differences in numbers between the sampling techniques over the years can be explained by many factors; e.g., additional samples were taken at some locations, and sometimes rooms fell out of use for GMP practices, which applies to the SPECT section from 2019 on.
The sampling frequency of passive air sampling was influenced by the number of preparations, which varied between the years depending on the number of patients and scans to be carried out. Occasionally, measurements were not carried out (forgotten, human error, lack of staff); in most cases this was corrected by an extra measurement at the beginning of the following month. Sometimes, two measurements were taken close to each other in time, while a third was taken far from the second, resulting in a larger gap between these last two measurements and thus creating a period with no data. Both situations could arise because the sampling dates differ per location. A practical solution would be to perform the measurements at a fixed time, e.g., the first day of each month; such deviations could then be prevented.
It appeared that the overall proportions between compliant outcomes and excursions of the action limit were larger in the data of the grade A areas (Figure 2) than in the data of all rooms combined (Figure 1). Thus, the grade A areas complied significantly less with GMP specifications than the combined data from all sampled premises at the Department of NMMI (p < 0.00001; two-sided Fisher's exact test). This cannot be explained by the seasonal conditions (Figure 5, Table 1) and is likely due to the stricter GMP limits for grade A areas. Before 2023, one single cfu per, e.g., 2-3 months was regarded as a more or less regular excursion, as long as the average number of cfus was far below 1. Furthermore, sampling was performed more frequently in grade A areas; e.g., the passive air sampling outcomes for the grade A areas added up to about half of the passive air sampling of the entire department over the period investigated. In contrast, for grade C hot cells, the risk of microbial excursions is negligible.
The glovebox A cabinet revealed more microbiological excursions than the other grade A areas (p < 0.0001; two-sided Fisher's exact test) (Figure 3), which may be explained by the higher number and the complexity of manual activities performed in this cabinet during radiopharmaceutical production, combined with a rather small working surface. Moreover, when many objects are placed on this surface, the contamination risk is further enhanced.
As no structural adjustments were made in the procedures at the Department of NMMI during the period 2010-2022, the only partial reason for the increase in relative excursions in recent years for active air sampling and contact plates (active monitoring) would be aging of the extensively used equipment. As the staff are known to be the largest source of microbial contamination, the most likely causes are suboptimal personnel gowning, room ventilation, and/or surface disinfection. All of these can be positively influenced by increasing the motivation and training of the personnel. The COVID-19 pandemic may also have had a negative influence on these matters due to shortages in disinfectants and gowning materials as a result of the outbreak [14]. This was, however, not seen for passive air monitoring. Theoretically, all sampling techniques monitor similar sources; e.g., contact plates and passive air sampling resemble each other to a great extent, as both sample settling microorganisms. Due to the lower frequency of contact plate sampling and active air sampling, these methods only provide a snapshot compared to the more frequent passive air sampling. The increase in the number of contact plate samples in 2011 may be explained by the ongoing cleanroom qualification of the new PET-GMP cleanroom in that year.
The most valuable data are those obtained from passive air sampling, because this was performed during every production and was therefore considerably more frequent than the other sampling methods. A rise in the number of measurements from 2010 to 2018 was seen, which is in accordance with the increasing number of radiopharmaceuticals that were produced during this period. There was a subsequent dip in the number of monitoring samples between 2019 and 2022. This may be attributed to the replacement of an air filter in one of the LAF cabinets in 2019, which could therefore not be used for production, and hence no monitoring was performed. Furthermore, the number of productions in 2021 and 2022 was lower than in previous years due to the decreased number of scans performed during the COVID-19 pandemic, causing a concomitant reduction in the number of monitoring samples.
A general trend visible for all EM methods was the decrease in the number of measurements between the alert level and the action limit. A plausible explanation is that, since 2011, the rules within the Department of NMMI have been tightened for grade A locations. For the years 2012-2022, the action limit for active air sampling, passive air sampling, and contact plates in grade A was 1 cfu.
Figure 4 shows the percentage of total excursions in each year for all EM tests combined, for the complete Department of NMMI as well as for only the grade A areas. The data are comparable for both graphs, although the grade A areas start to show slightly higher percentages of excursions from 2020 onwards, which resembles the increase in excursions shown in Figures 1B,C and 2B,C. It is likely that the excursions in the grade A areas were due to the causes mentioned earlier.
Table 2 presents the passive air monitoring data of the period 2010-2022 divided into four trimesters (Q1-Q4). Seasonal conditions are a major influence on the survival of airborne bacteria, though important factors such as humidity and temperature need to be in certain ratios to one another for each specific species of bacterium or fungus for this survival to occur [15]. Each trimester has its own specific temperature and humidity, resulting in different conditions for microorganisms to thrive. It appears that, even though the weather conditions of the respective trimesters differ [16], this is not reflected in major changes in excursions. A plausible reason for this is that the temperature and air humidity in the cleanroom are kept constant throughout the year.
The percentage of measurements containing microbiological contamination was the lowest for passive air sampling and the highest for the contact prints. This indicates that contamination may be caused by improper cleaning of the starting materials or by staff working suboptimally with regard to aseptic handling. Another issue is that it becomes increasingly difficult to clean aging facilities and equipment. When the data of active air sampling, passive air sampling, and contact prints are compared to the overall data during 2010-2022, an increase in the CRR was noticed in 2021-2022.
The CRR is defined as the incidence of any microbial growth over time, irrespective of the number of cfus found. The United States Pharmacopeia (USP) requires a CRR of less than 1% to be satisfactory for aseptic handling in ISO 5 cleanrooms, which can be compared to EU-GMP grade A. Less than 1% is, however, very hard to achieve in practice, and a CRR of less than 10% is reasonable according to Boom et al. [12]. When this is applied, the greater part of the recorded period 2010-2022 satisfies this demand, having only crossed this limit in the early years and once in 2021. This was mainly caused by an increased percentage of excursions for active air sampling and contact prints. The CRR of passive air sampling did not increase or decrease remarkably over the years. Therefore, not only contamination caused by a contaminated workbench, but also contamination from the air should be reduced in the coming years. It is difficult to say what exactly caused the increase in CRR; to find this out, more background data need to be obtained regarding the working method, the materials used (including their contamination rate), the way staff are trained aseptically, the different radiopharmaceuticals that are produced, and so on.
Many pharmacy-associated organizations in Europe use either the GMP Annex 1 or a plan derived from these guidelines. The goal is to establish a highly effective quality assurance system [17]. The revision of Annex 1, which came into effect on 25 August 2023, has brought many changes, some clarifying previously ill-defined subjects and statements, and some setting new demands for cleanrooms, disinfection, and microbiological monitoring. Also, in the United States, the requirements for microbiological conditions in cleanrooms have recently been redefined in chapter <797> of the United States Pharmacopeia [18].
Due to the stricter requirements of the new version of Annex 1 of the EU-GMP, we needed to change the focus of this paper. In particular, the change in the grade A requirement from <1 cfu to 0 cfus for all microbial parameters has a great impact. Under the Annex 1 from 2008, this implied that an incidental excursion would not be that severe, but that action should be undertaken when contamination is encountered frequently. In contrast, in the revised version of the EU-GMP Annex 1 this has changed into no growth at all, implying a major additional task for quality assurance and cleanroom staff. When a cfu is found in a grade A area, an investigation regarding the cause must be started, taking up valuable time and resources. As aseptic handling is performed manually, contamination is possible no matter what precautions are taken [12]. For complex aseptic preparations, a microbial shelf life of one week is acceptable [7]. As most radiopharmaceuticals have a considerably shorter physical and chemical shelf life, applying this guideline may lead to an overestimation of the measures needed to mitigate the risks of microbial contamination of a radiopharmaceutical preparation.
Future improvements may encompass the introduction of automated processes, e.g., robots and modules, as well as the prevention of direct human manipulation before, during, and after aseptic procedures in a grade A zone with a grade B background. Furthermore, the disinfection of all materials introduced into a grade A zone must be optimized and evaluated. Spraying methods should be replaced by mechanical surface cleaning using towels [19], while the use of sporicidal cleaning agents should prevent contamination with Bacillus species. Monitoring in grade A areas is to be restricted to critical steps only (e.g., aseptic filling, sampling for sterility control, assembly of a sterile filter). Finally, the systematic application of both glove and finger prints needs to be implemented; up until now, this is only performed in the media fill setting.
EM is a necessary tool to continuously assure the quality of the radiopharmaceuticals produced. By evaluating the results collected from 2010-2022, the weakest points were identified, and based on those results, improvements can be made in the coming years to further increase the quality of the products. With the revision of EU-GMP Annex 1, it is important that the number of excursions is further reduced in order to still fulfil the requirements.
Limited data and different regulations during the first years of the analyzed period (2010-2022) made it harder to compare these with the data of later years. Furthermore, some data were not recorded, such as minor accidents or events that could have influenced the conditions in the different cleanrooms. This leads to an absence of causal links for some trends found in the data, leaving no clear cause and making it impossible to recommend specific improvements for any observed deterioration, especially for the early parts of the assessed period.
Conclusions
Based on this comprehensive retrospective analysis of the EM data, it is concluded that, in order to become fully compliant with the requirements set in the recent revision of EU-GMP Annex 1, strategies to further improve microbiological control, and thereby the quality of the radiopharmaceutical products at the Department of NMMI of the UMCG, are advisable. These include: improvement of cleaning and disinfection procedures; more efficient working methods and further automation; optimization of all conditions under which aseptic manufacturing is performed; and broader application of glove and finger prints.
Our analysis offers good insight into the dynamics of microbiological hygiene in an aseptic production facility under the influence of (minor) adaptations in procedures and equipment, and reflects the current situation in relation to GMP requirements in a historical perspective. Based on this, it is easier to take well-balanced decisions on the adaptations necessary to fully comply with the current, stringent regulations.
Figure 1. Combined EM data for all rooms of the Department of NMMI over the period 2010-2022. (A) Results of active air sampling. (B) Results of passive air sampling. (C) Results of RODAC sampling (contact plates). (D) Results of particle counting (0.5 µm). (E) Results of particle counting (5 µm). Green: below the alert level (compliant). Yellow: between the alert level and the action limit (alert). Red: above the action limit (excursion).
Figure 2. EM data for all grade A cleanrooms of the NMMI department over the period 2010-2022. (A) Results of active air sampling. (B) Results of passive air sampling. (C) Results of RODAC (contact plates). (D) Results of particle counting (0.5 µm). (E) Results of particle counting (5 µm). Note that for both (D,E), data for 2022 are missing. Green: below the alert level (compliant). Yellow: between the alert level and the action limit (alert). Red: above the action limit (excursion).
Figure 3. Combined data of all EM sampling techniques from the period 2010-2022 for each individual grade A area at the Department of NMMI. Green: below the alert level (compliant). Yellow: between the alert level and the action limit (alert). Red: above the action limit (excursion).
Figure 4. Percentages of excursions of all EM sampling techniques combined for each year from 2010-2022 for the complete Department of NMMI (solid line) and for the EU-GMP grade A areas (dashed line).
Figure 5. Contamination recovery rate (CRR) per year for areas with GMP classification A for combined active air sampling, passive air sampling, and contact prints over the period 2010-2022.
Table 2. Percentage of EM measurements below the alert level (compliant), between the alert level and the action limit (alert), and above the action limit (excursion) per quarter over the period 2010-2022.
Table 3. Contamination recovery rate (CRR) of areas with GMP classification A for combined active air sampling, passive air sampling, and contact prints over the period 2010-2022. | 2024-08-03T15:13:36.420Z | 2024-07-27T00:00:00.000 | {
"year": 2024,
"sha1": "bcd754d3352d9347a2087a1340e92959c5a9737b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2673-947X/4/3/23/pdf?version=1722073757",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "39ac0c4d7b9143ead3c27f8fe1e7bbc892e481ca",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": []
} |
233260809 | pes2o/s2orc | v3-fos-license | EXPTIME Hardness of an n by n Custodian Capture Game
Custodian capture occurs when a player has placed two of his pieces on the opposite sides of an orthogonal line of the opponent's men. Each piece moves like the rook in Chess. Different cultures played it from pre-modern times in two-player strategy board games: Ludus Latrunculorum (Kowalski's reconstruction), Hasami shogi in Japan, Mak-yek in Thailand and Myanmar, Ming Mang in Tibet, and so on. We prove that a custodian capture game on an n × n square board is EXPTIME hard if the first player to capture five or more men in total wins.
Introduction
Custodial capture occurs on a square lattice board when a player has placed two of his pieces on the opposite sides of an orthogonal line of the opponent's men. Different cultures played it in two-player strategy (i.e., perfect information) board games [1,2]. In A History of Chess [3], after introducing the Tafl games of the Nordic and Celtic regions, Murray noted that "the method of capture in this game is identical with that in the unknown Latin game Ludus latrunculorum, in the game which Firdawsī attributes to Buzūrjmihr in the Shāhnāma, the Egyptian sīga, and a few other Eastern board-games." His other volume, A History of Board Games other than Chess [4], classified battle games "by, first, the method of capture, beginning with those that employ the interception capture-the oldest form of capture in war games-and, second, by the type of move employed." A section titled "Games with interception capture and orthogonal moves" introduced Seega in Egypt, Mak-yek in Thailand, Apit-sodok in Malaysia, Hasami-Shogi in Japan, Gala in Sulawesi, and so on. Hasami-Shogi remains popular among Japanese children.
Although different cultures specified their own rules, this paper takes the following ones:
R.1 Each player moves one of his pieces in his turn.
R.2 Each piece may move in any orthogonal direction and over any distance with no obstacle (like the rook in Chess).
R.3 When a player succeeds in custodially capturing the enemy's men in his turn by moving his piece adjacently next to them, the game removes those men.
R.4 Repeating sequences of moves are prohibited: if the same position of his men occurs three times, with the same player to move, he must vary his choice to avoid the repetition.
R.5 The first player to capture a specified total number of the opponent's men throughout the game wins.
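To make R.2 and R.3 concrete, the following sketch checks for custodial captures after a rook-like move on a small board; the board encoding (0 empty, 1 WHITE, 2 BLACK) and the function name are illustrative assumptions, not part of the paper's construction:

def custodial_captures(board, r, c):
    """Remove and return the enemy men captured after the mover's man arrives at (r, c)."""
    n, me = len(board), board[r][c]
    foe = 2 if me == 1 else 1
    captured = []
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        run, rr, cc = [], r + dr, c + dc
        while 0 <= rr < n and 0 <= cc < n and board[rr][cc] == foe:
            run.append((rr, cc))
            rr, cc = rr + dr, cc + dc
        # the contiguous enemy line is captured only if it is closed off by the mover's own man
        if run and 0 <= rr < n and 0 <= cc < n and board[rr][cc] == me:
            captured.extend(run)
    for rr, cc in captured:
        board[rr][cc] = 0
    return captured

b = [[0, 1, 2, 2, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 1]]
b[4][4], b[0][4] = 0, 1              # WHITE slides his man up the file, rook-like (R.2)
print(custodial_captures(b, 0, 4))   # R.3 removes the flanked BLACK men: [(0, 3), (0, 2)]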
These rules resemble Hasami-Shogi and Ludus latrunculorum (Kowalski's reconstruction). Maak yék (in Captain James Low's writing [5]) and Ming Mang [6] have similarities in R.2 and R.3. However, the other reconstructions of Ludus Latrunculorum, the Tafl games [7], and many others cannot capture multiple men in a line together. These games take the usual starting positions, e.g., placing all of WHITE's (resp. BLACK's) men on the lowest (resp. highest) ranks. They often force the game (without R.4) to be a draw, since the players may take defensive strategies that never allow the opponent to capture the player's men.
Once the game rules are fixed, combinatorial game theory asks which player wins from a given position of the men on the board. The answer may depend on the various board sizes and initial configurations specific to each culture. Computational complexity aims to let ideal game machines (i.e., Turing machines) answer this question for any given board size and initial setup. A future machine could be much faster than the current ones, even solving any finite-size problem in a moment, so computational complexity concerns n by n board games rather than the fixed-size boards interesting for human players. It classifies them into the hierarchy PTIME ⊆ PSPACE ⊆ EXPTIME ⊆ EXPSPACE. PTIME is the class of problems solvable in polynomial (n^k) time, PSPACE in polynomial space, EXPTIME in exponential (c^(n^k)) time, and EXPSPACE in exponential space (for some constants c > 1 and k > 0). For example, Chess [8], Checkers [9], and Go (Japanese ko-rule) [10] on an n by n board are EXPTIME complete. In other words, a machine for n by n Chess solves any EXPTIME problem by playing Chess starting from a position encoding the problem. Consequently, those games' best algorithms might be no faster than searching the vast memory holding all possible c^(n^2) positions of the pieces on an n by n board, although EXPTIME ≠ PSPACE is merely a conjecture. One thing is for sure: EXPTIME ≠ PTIME, i.e., Chess, Checkers, and Go on an n by n board are unsolvable in polynomial time by modern computers. See any computational complexity textbook for more detail, e.g., [11,12].
This paper studies the computational complexity of R.1-R.5 under a straightforward winning rule: the winning number of R.5 is five. It can be any number (no smaller than five) but fixed independently of the number of opponent pieces, although most war games set it equal to (or a few less than) the number of the opponent's pieces.

Theorem 1. The custodian capture game of R.1-R.5 on the n × n square board is EXPTIME hard.
Thus, a simple custodian capture game is as hard to play as modern complex games like Chess, Checkers, and Go (Japanese ko-rule) on arbitrary board sizes and configurations.
It could be even more complicated, since R.4 resembles the Chinese ko-rule of Go, which prohibits returning to any position of the pieces on the board that has occurred previously. Chinese ko-rule Go is in EXPSPACE, by keeping an archive of all visited positions, and its computational complexity lies anywhere between PSPACE [13-17] and EXPSPACE. Theorem 1 places the complexity of the custodian capture game between EXPTIME and EXPSPACE, using R.4 essentially. Some other games, e.g., Chess [8] and Checkers [9], enjoy both EXPTIME completeness when repetition is allowed and EXPSPACE completeness when it is prohibited [18].
A Proof Outline
Stockmeyer and Chandra [19,20] presented the famous G-series (G1-G6) of typical EXPTIME-complete games under log-space many-one reduction. For example, G2 in Figure 1 proceeds by alternating turns between WHITE and BLACK, permitting WHITE to switch one of his Boolean (TRUE or FALSE valued) variables (X1 or X2) or to pass (changing no variable) in order to make his winning formula W-WIN TRUE. BLACK does the same for her variables Y1, Y2 and formula B-WIN. The first player to make his formula TRUE at the end of his turn wins. For example, suppose that the G2 game starts from (X1, X2, Y1, Y2) = (FALSE, FALSE, FALSE, TRUE) with WHITE to move. WHITE switches X1: FALSE → TRUE to make two (X1 and ¬X2) of the three literals in a term A3 TRUE, while all terms of the opponent's 3-term Disjunctive Normal Form (DNF) B1 ∨ B2 ∨ B3 remain FALSE even after switching any variable. BLACK responds with Y1: FALSE → TRUE, making three (X1, ¬X2, and Y1) of the four literals in B3 TRUE, which is a wrong move since A3 becomes TRUE; WHITE wins in the next turn by either changing Y2 or passing. Stockmeyer and Chandra proved that any of the Gi games can simulate any EXPTIME problem by adjusting the winning DNFs. Fraenkel and Lichtenstein [8] demonstrated that Chess on an n by n board can solve G3, and hence any EXPTIME problem; playing Chess starting from an initial position implementing the G3 mechanics forces the players to play the G3 game. The previously known EXPTIME-hardness results relied on one of the G-series [8-10,21-23]. We establish Theorem 1 by reducing their G2 game to the custodian capture game on the n × n board with R.1-R.5.

Definition 1 (G2 game). The G2 game takes (τ, W-WIN(X, Y), B-WIN(X, Y), α) as a configuration. It consists of a turn player τ ∈ {WHITE(•), BLACK(•)} making the first move, 12DNF formulae τ-WIN(X, Y) representing τ's winning condition, and an initial Boolean assignment α ∈ {0, 1}^(X∪Y). From the disjoint sets X and Y of Boolean variables, WHITE can change only those in X, while BLACK can change only those in Y. The game starts from a given configuration and proceeds by alternating turns in which the turn player changes at most one of his variables. The first player to satisfy his winning condition at the end of his turn wins the game.
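The following sketch spells out a G2 position and one turn of play; the literal encoding, the toy one-term formula, and all identifiers are hypothetical and much smaller than the 12DNF formulae used in the actual reduction:

def dnf_true(terms, assignment):
    """terms: list of lists of literals such as "X1" or "~X2"."""
    def lit(l):
        return not assignment[l[1:]] if l.startswith("~") else assignment[l]
    return any(all(lit(l) for l in term) for term in terms)

def legal_moves(own_vars):
    return [None] + list(own_vars)       # pass, or flip one owned variable

def play_move(assignment, var):
    new = dict(assignment)
    if var is not None:
        new[var] = not new[var]
    return new

W_WIN = [["X1", "~X2", "Y1"]]            # toy 1-term DNF for illustration only
alpha = {"X1": False, "X2": False, "Y1": True, "Y2": True}
for move in legal_moves(["X1", "X2"]):
    if dnf_true(W_WIN, play_move(alpha, move)):
        print("WHITE wins by flipping", move)   # prints: WHITE wins by flipping X1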
R.5 allows a player to sum up the numbers of men captured in different places and win, making the number of pieces captured so far part of a game configuration. However, this will never happen in our proof; i.e., the players will win by taking enough men at once.
Gadgets in Dead-Lock
R.1 and R.2 of the custodian capture game provide a high degree of freedom to choose which men to move, in which orthogonal directions, and over what distance free of other men. We will build a configuration of the custodian capture game in which player τ can move none of his men in the initial position, called dead-locked pieces, except for a single man, named the free τ. The players must then play an escape-and-chase game, moving only their free men. We can build all gadgets (groups of pieces placed on the game board) using W-walls (Figure 2) and their dual B-walls. Figure 6 provides a base gadget from which all other gadgets are obtained by modification. The walls fill the entire gameboard, leaving the minimum of blank squares, the gray zone called a passage, to separate them.

Lemma 1. Figure 6 is dead-locked.
Proof. We show that if WHITE releases (i.e., moves) any man dead-locked in Figure 6, he loses immediately. By symmetry, we cover only a few cases. Suppose WHITE releases a wall's • in Figure 7 and captures a single • as in Figure 8. A neighboring • intrudes into the cracked square (i.e., the place that has become empty by custodian capture) in Figure 9 and recaptures more than five • in Figure 10; BLACK wins by R.5. If WHITE releases a corner's piece horizontally, he must soon lose by a similar argument.

Figure 6 locates stoppable squares over the crossings, called stops. The free men can move along the passages freely but must halt only at these stops.

Lemma 2. Figure 6 forces the free men to rest only at the stops.
Proof. If the free • rests at any non-stop in Figure 19 and captures a single • as in Figure 20, • intrudes into the cracked square in Figure 21, recaptures more than five • in Figure 22, and wins. The same holds for the non-stoppable places near the corners inside the focuses B and C, as shown in Figures 11-18.
Obstacles
The corridor may give too much freedom to play any meaningful game. Figure 23 modifies it to block the passages. We call this new gadget an obstacle and abbreviate it in the subsequent figures.

Lemma 3. Figure 23 is dead-locked.
Proof. If WHITE releases the obstacle's • in Figure 27, • comes into the abandoned square (i.e., the place that has become empty by a man's departure from there) in Figure 28 and wins by capturing many • in Figure 29.

Lemma 4. Figure 30 is dead-locked.
Proof. If BLACK releases a left wall's man in Figure 33, • comes into the abandoned square in Figure 34 and wins by capturing five • in Figure 35.
Weak Points
Obstacles and the one-sided parking squares in the previous sections explicitly bound the playing field on the gameboard. On the other hand, τ-weak points, τ-timers, and τ-magnets in the succeeding ones induce invisible forces bounding player τ's moves. Figure 40 is a B-weak point, modifying Figure 6 inside the focus. It contains a checkmate square. WHITE "checkmates" if he goes there in his turn. We abbreviate it and its dual as in Figures 41 and 42.
Timers
Cooperating with W-weak points, B-timers oblige the players to engage in the escape-and-chase game, forcing the free • to escape and the free • to chase. A B-timer gadget (Figure 81) has twin checkmates, pulling in the free men that approach it; it finally catches them and bounds them to stay around it. This phenomenon can make a switch gate (see Figure 105) take only binary states. We thus call it a W-magnet and abbreviate it and its dual in Figures 82 and 83.
Gates
This section provides all the gates needed to prove Theorem 1. In their figures, squares are labeled by capital letters such as A and B. These geometric objects provide sets of stoppable squares (stops and parking squares); we write |S| for the number of such squares in an object S, and τ ∈ S to say that τ stands on a square of S.
As already proved in the previous section, the players must move only their free men, so τ ∈ {•, •} often denotes the free τ and the square on which it stands in each figure. We write (τ, τ̄) = (A, B) → (C, D) to say that, starting from (τ, τ̄) = (A, B), τ is the first to go from A to C, and then the opponent τ̄ goes from B to D in the next turn; when τ̄ is the first to move, we write the pair in the opposite order. The checkmate squares in weak points, timers, and magnets are the only places where the players can force a win. We refer to these by their gadget's names and measure the distance (i.e., the minimum number of steps) d_τ(A, G) for τ at A to reach G's checkmate. Let b_τ ∈ {0, 1} be the indicator that τ is the first to move. When the free (τ, τ̄) = (A, B), the free τ̄ must take one of three strategies against τ's checkmate at G, the first being to protect G. If τ̄ fails all of them, τ̄ must lose G and the game (τ wins G and the game); otherwise, τ̄ survives G.
In proving Theorem 1, winning by R.4 is hard to confirm, since one must analyze a repetition (a.k.a. cycle, or closed loop) of positions as being inevitable. The escape-and-chase game allows us to reduce R.4 to an attack rule R.4′: make a winning repetition. Suppose that the escaper must take the unique winning moves. Immediately after the escaper has visited the same square twice, the chaser will do the same, producing a sequence of positions each visited only once except for the current one, visited twice. The chaser repeats walking the same path several times, forces the escaper to be the first to stop at the same square thrice, and wins by R.4. Aida, Crasmaru, Regan, and Watanabe [24] (ACRW) proved that any two-player strategy board game is extendable to another game in which every position possesses a unique winning move for either player. This remarkable result allows us to adopt R.4′ instead of R.4 in proving Theorem 1.
Definition 3 (unique winning moves).
A two-player game with no infinite repetitions divides all possible positions into either player's winning ones. It has unique winning moves if each position admits at most one winning move for either player in the following sense: if a player is the first to move at one of his winning positions, he has only one action leading to a new winning position of his; if the opponent is the first to move there, all of the opponent's moves lead to the player's winning positions.
Definition 4 (R.4′). In a two-player board game, a game position is a pair π = (π_τ, π_τ̄) of τ's men's position π_τ and τ̄'s position π_τ̄. Let #π_τ be the number of times that τ's men have visited π_τ since the beginning of the game. A repetition is a closed sequence π_1 → ... → π_k = π_1 of game positions induced by the players' moves. It is τ̄'s winning one if ∀i, #π_{i,τ̄} ≤ #π_{i,τ}. If a player succeeds in making his winning repetition by his move, he wins.
Lemma 12. In any two-player board game with unique winning moves, a win under R.4′ implies a win under R.4.
In the current Section 3, except for Section 3.6, R.4 applies π_i to the free men's positions (•, •), although π_i must in principle look at all men on the board. Theorem 1's proof will justify this in the final Section 4.
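The bookkeeping behind the repetition rules can be pictured as follows; the class name and the frozenset encoding of a men's position are assumptions made only to illustrate how a threefold occurrence would be detected, not the construction used in the proof:

from collections import Counter

class RepetitionTracker:
    """Counts how often each player's men's position has occurred with that player to move."""
    def __init__(self):
        self.visits = Counter()

    def record(self, player, position):
        key = (player, position)
        self.visits[key] += 1
        return self.visits[key]

tracker = RepetitionTracker()
pos = frozenset({(0, 1), (3, 3)})        # hypothetical set of occupied squares
for turn in range(3):
    count = tracker.record("WHITE", pos)
print("third occurrence reached:", count == 3)  # True -> WHITE must vary his move (R.4)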
One-Way Roads
Figures 25 and 26 are straight and L-bend passages along which the free men can move in both the forward and backward directions. One-way roads attach B-timers and W-weak points to them, as in Figure 99, such that the players must proceed only in the forward direction. WHITE must escape on these one-way roads, and BLACK must chase by the unique alternative moves. We describe this in Figure 99 (with marks in Figure 100). WHITE must escape along A → B → D → F, and BLACK must chase him with no delay. Notice that Figure 100 repeats congruently, so that LA and FK (resp. AB) build a gadget congruent with the one standing on BD (resp. DF).
Bridges
A bridge stands over a crossing point of two one-way roads, prohibiting the players from changing their directions; they must go straight and never turn left or right. Figure 101 draws a bridge. The players coming from the left (resp. from below) must go to the right (resp. upward).
In the third turn, • cannot go backward without losing w1 as above. The • may stick to FT1 to checkmate at T1, but • can wait for it in LO on the assumption |FT1| ≤ |LO|.
Branches
A branch is a T-junction where the escaper can choose the direction in which to proceed, to the left or to the right. Figure 102 draws a W-branch, where the players coming from below must go together either left or right, with WHITE determining the direction.
Junctions
Junctions gather the one-way roads multiplied by branches back into one. Figure 103 builds a junction such that the players coming from either the left or the right must go together in the upward direction.
Starts
Starts are the gates where the free men exchange their roles in the escape-and-chase game. Figure 104 is a B-start gate, where the free • turns from escaper to chaser and the • from chaser to escaper. After that, the men may stick to • ∈ KC and • ∈ IH. If one man reaches {K, I}, the other must do the same and make a position (•, •) = (K, I) in the next turn, for • (resp. •) to block •'s (resp. •'s) access to w2 (resp. w3).
After that, to avoid delaying the game, • must go immediately to L and beyond by R.4, since • can wait in O longer than • can in LO, on the assumption |OI| = 0 and |LO| ≤ |O|.
Consequently, Lemma 13's dual forces (•, •) = (K, I) → (L, LI) → (P, L).

Figure 105 is dead-locked except for the bounded men on S1M and S2U, who can move only horizontally.

Proof. We have already shown all gadgets inside Figure 105 to be dead-locked. If M's • moved horizontally, P's • would march to w2 and win, so the men standing on M, U, M′, U′ are unmovable; thus the men at P, R, P′, R′ are as well, by construction. If the bounded • on S1M moved vertically, Lemma 11 would force BLACK's win by first moving M′'s • to S1's right-side checkmate, and secondly M's • to the other checkmate.
The • at Z′ moves horizontally to block the free •'s passing through a W-winning lane crossing at Z′. Concretely speaking, Z′ is one of the stops Z_{i,sw} in Figure 106 (placed at the end of this section). A bounded • standing on Z′ takes five moves to reach the nearest W-weak point w(Z′) in the W-win lane; we write d_•(Z′, w(Z′)) = 5. All of G, G′, Z, and Z′ should be placed a long distance away from the other switch parts. The left-hand side of Figure 105 configures |S1M| = |S2U|, |N1L| = 0, |FN1| ≤ |N1|, |BT1| ≤ |CD| = |IK| (by placing a W-parking on CD), and |CD| + |EN1| ≤ |N1|; the right-hand side does the same. We make a one-to-one correspondence between S1M and S2U along with E ↔ L and write • ≡ • when • ∈ S1M and • ∈ S2U occupy corresponding places.
We call Figure 105 a W-switch since it takes either the ON or the OFF state to check the winning conditions. When WHITE comes there to change between ON and OFF, he swaps the free men with the bounded men. When the free men reach there, we denote them as in Figure 105. WHITE can choose which men's side becomes the next free men (the remainder are the bounded men). Figure 105 shows an OFF state, having (•b, •b) ∈ (S1M′, S2U′).

Lemma 19. Suppose WHITE's move incurs •f ∈ S1M and •b ∈ S1M′; then BLACK can win within two movements.

Lemma 20. Figure 105 with WHITE to move forces either → (S1M, S2U, V′, F′) or → (V, F, S1M′, S2U′), i.e., the switch ON or OFF.
Proof. If WHITE is to switch ON, he must move (•b, •b) = (E′, Y′) → (F′, Y′), for the following reason, in either case Y′ = L′ or Y′ ≠ L′ (i.e., Y′ ∈ S2U′ − L′). Any horizontal move leads to (X1′, w4′), or incurs BLACK's winning repetition, where the same analysis forces WHITE's comeback (X1′, L′) → (E′, L′). The •b could never win T1′ against •b's protection, by d_•(Y′, J′) ≤ 3 = d_•(E′, T1′). R.4 forbids •b to stay in F′N1′, since •b can wait for it in N1′ on the assumption |N1′L′| = 0 and |F′N1′| ≤ |N1′|. Eventually, •b must go to F′ immediately so as not to delay the game.
Similarly, BLACK must respond as follows. The Y′ = L′ case forces either •b's vertical move (by the same analysis) or •f's horizontal move. In the repetition, WHITE could never hope to win T1, T2, T1′, T2′, or w(G), as shown in Lemma 20; •f ∈ S1M − {E} forces WHITE to lose w4. He could never win w(G′), since WHITE's next move depends on whether •b = E′ or •b ≠ E′ (i.e., •b ∈ S1M′ − {E′}). In the former case, WHITE can move to realize it. In any case, Lemma 20 allowed WHITE to switch ON or OFF.
As shown in Lemma 22, BLACK can never hope to win w(Z) nor w(Z′) in Figure 106's construction.
WHITE may choose to enter from the wrong-side entrance of Figure 105, where the right men already reside, but he can neither defeat BLACK nor change the switch's state. The assumption |CD| + |EN1| ≤ |N1| allows •f to wait in N1 until R.4 (and Lemma 19) applies. The former case forces BLACK to win w4, as shown in Lemma 20, and the latter one follows from Figure 106's construction. The same analysis prohibits WHITE from winning w(G), so WHITE must lose by R.4.

Figure 106. W-win lanes crossing with the W-switch.
Lemma 25. Let i = 1 or 2, and suppose Figure 106 starts as drawn.

Proof. When •b ∈ S2_sw U_sw (i.e., •b ∈ S2_sw′ U_sw′), Lemma 14 forces the claimed moves, since Figure 106's gadget is built over Figure 101. After that, the •f must go to either B_i or non-B_i and lose to BLACK's winning repetition.
EXPTIME Hardness
Theorem 2 (Theorem 1, formal). The G2-game is log-space many-one reducible to the custodian capture game of R.1-R.5 over the n × n square board with the winning number five.
Proof. We assume that Definition 1's G2-game solves a dichotomy (i.e., no-draw) problem in EXPTIME. We prove that the G2-game's winner, say BLACK, is also the custodial capture game's winner. The same assignment α never appears again in the same player's turn; thus, the positions of the bounded men under the correspondence do not repeat either. We embed a G2 game's configuration into the n × n board's custodian capture game by Figure 107's mapping. It combines the W-field on the upper half and the dual B-field on the lower half; we explain only the W-field. It simulates one of WHITE's turns in the G2 game by a series of alternating turns in the custodian capture game, beginning from the W-start gate on the left and ending at the B-start gate on the right. The players must mimic an escape-and-chase match (to avoid immediate defeat) and proceed along the continuous solid lines. Lemma 14 allows the lines to cross with no interference. Lemma 17's dual starts WHITE's turn from the W-start gate. At the W-branches above the W-start, Lemma 15 allows WHITE to choose the variable, say X1, that he will change. Lemma 23 changes the state of the W-switch[X1] (or leaves it unchanged, by his choice), and Lemma 16 merges the branches at the right W-joins to guide WHITE to the last W-branch above the B-start, where he can declare his win at the end of his turn. The declaration brings him to the top W-branches to pick his winning term, say A3 in Figure 108, and to walk along it as described in Figure 109: visit the right side of the W-switch[X1] (Figure 105). Notice that Figure 106 may cascade more than two lanes for implementing WHITE's 12-term DNF; alternatively, two cascades are enough by using a branch-switch. We can use one of the two W-win lanes to force WHITE to choose the same state over these switches as follows: let him choose ON or OFF and walk along the route passing through Z or Z′ of these Xi-switches (G or G′ of the Yi-switches), respectively, to make sure that the states of these switches are ON (resp. OFF). Figure 107 maps a G2-game instance into a custodian capture game of size O(|X| + |Y|) within at most O(log n) computational space in the following manner. Each W-switch corresponds to each occurrence of the X-variables. The W-branches on the left towards the W-switches include a binary tree with 2|X| leaves, where each pair connects to the left and right entrances of a W-switch. Similarly, the right W-junctions form a binary tree of paired leaves connecting to the W-switches' exits. Each W-winning lane begins from a B-branch at the top, passes through switches, and ends at a W-victory gate. Lemmas 1, 3, 4, 6, 8, 10, and 18 demand that each player move only the single free man or one of the many bounded men. They justify applying Lemma 12 to only the free and bounded men rather than to all men on the board. Lemmas 13-17, 21, and 25 look only at the two free men's moves in their winning repetitions; Lemmas 19-20 and 22-24 only at the four men inside a single switch. These lemmas should count the bounded men's positions in the other switching gadgets as well and apply Lemma 12. However, we can justify this as follows and complete the current theorem's proof.
ACRW proved that any two-person board game might not change its winner even if the turn player allows the opponent (or the game-solving machine) to select the player's moves so that just one winning move remains. In the ACRW extension of the game, every winning position possesses Definition 3's unique winning move. We may assume this for the current custodian capture game as well.
A mini-game starts when (•f, •f, •b, •b) = (E, L, E′, L′) occurs in one switch and ends when it does in another one. During the mini-game, let t count the number of moves, let π(t) = (π_f(t), π_b(t)) = (π_{f,•}(t), π_{f,•}(t), π_{b,•}(t), π_{b,•}(t)) be the {free, bounded} × {•, •} men's positions at a given time t, and let #π(t) be the number of times the position π has been visited from the beginning of the mini-game until time t. It is a non-decreasing function, reset to 0 when a new mini-game starts. Suppose the opponent promises to take the deterministic (predetermined) moves against the player's unique winning ones. To not delay the game, WHITE should skip π(t) → π(t + k) in a repetition π(t) → π(t + 1) → · · · → π(t + k) = π(t) of his winning positions. Notice that the above lemmas' repetitions could never get the free men out of the corresponding figures' entrances, by Lemma 13.
Conclusions
We have proved that a custodian capture game over the n × n square board is EXPTIME hard. It allows capturing multiple men in a line at once, like Japanese Hasami-Shogi. Ludus Latrunculorum, the Tafl games, and many others do not, and their computational complexity over the n × n board is still unknown. Our proof relied heavily on the no-repetition rule R.4, which might establish even EXPSPACE-completeness by analyzing the bounded men's trajectories. The custodian capture game allowing repetition of positions might be EXPTIME-complete, like Chess [8]. | 2021-04-17T13:17:01.616Z | 2021-02-24T00:00:00.000 | {
"year": 2021,
"sha1": "50a6ac6dc43fac47f76ac94e2af4fdba2d9e70e6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4893/14/3/70/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "9834f48a5d99c2a2cb49bf36af60311add220f8c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
14618879 | pes2o/s2orc | v3-fos-license | Identifying the bottom line after a stock market crash
In this empirical paper we show that in the months following a crash there is a distinct connection between the fall of stock prices and the increase in the range of interest rates for a sample of bonds. This variable, which is often referred to as the interest rate spread variable, can be considered as a statistical measure of the disparity in lenders' opinions about the future; in other words, it provides an operational definition of the uncertainty faced by economic agents. The observation that there is a strong negative correlation between stock prices and the spread variable relies on the examination of 8 major crashes in the United States between 1857 and 1987. That relationship, which has remained valid for one and a half centuries in spite of important changes in the organization of financial markets, can be of interest in the perspective of Monte Carlo simulations of stock markets.
Stock prices and interest rates
In this paper we show that in the time interval between crash and recovery there is a clear relationship between price variations and the dispersion of interest rates for bonds of different grades (see below), i.e., what is usually called the interest rate spread. Before explaining this relationship in more detail, let us emphasize that it was observed empirically from the mid-nineteenth century to the latest major crash in 1987. This is in strong contrast with so many "regularities" which are dependent upon specific business circumstances. Such is for instance the case of the interest rate itself. Because of the close connection between stock and bond markets, one would expect a strong link between stock prices and interest rates. This is not the case, however; there seems to be no permanent relationship between these variables; see in this respect the conclusions of [13] and [18, p. 241]. It is true that sometimes a slight decrease in interest rates, by changing the "mood" of the market, suffices to send prices upward. Thus, in the fall of 1998 three successive quarter-point decreases of the federal funds rate (that is to say, a cumulative 0.75 percentage points) stopped the fall of the prices and brought about a rally. In other circumstances, however, even a huge drop in interest rates is unable to stop the fall of stock prices; an example is provided by the period from January 1930 to May 1931, when the interest rate fell from 6% to 2% without any effect on the level of stock prices; similarly, in the aftermath of the 1990 crash of the Japanese stock market, interest rates went down to almost zero percent without bringing about any recovery. One should not be surprised by the changing relationship between stock price levels and interest rates. Something similar can be observed in meteorology: sometimes a small fall in temperature is sufficient to produce rain, while in other circumstances a huge fall in temperature will not give any rain. In that case we know that the phenomenon has something to do with the humidity of the air; in the case of the stock market we do not really know which one of the many other variables plays the crucial role. In the light of such changing patterns, the fact that the relationship between stock prices and the spread variable appears to be so robust and so stable in the course of time is worthy of attention.
Interest rate spread and uncertainty
It is a common saying that "markets dislike and fear uncertainty". In a strong bull market there is little uncertainty; for everybody the word of the day is "full steam ahead". The situation is completely different after a crash. There is uncertainty about the duration of the bear market; some would think that it will be short while others expect a long crisis. In 1990, when the bubble burst on the Tokyo stock market, probably only few people expected the crisis to last for almost ten years. There is also uncertainty about which sectors will be the first to emerge from the turbulence: banks or investment funds, property funds or technology industry, etc. As we know, the interest rate represents the price a company pays to borrow money for the future. The more uncertain the future, the riskier the investment, and the higher the interest rate. We will indeed see that during recessions interest rates often (but not always) show an upward trend. In addition, and this is probably even more important, the increased uncertainty produces greater disparity in the rates of different loans. This disparity has different sources: (i) those who expect a short crisis will be tempted to lend at lower rates than those who fear a protracted recession; (ii) the fact that there is no longer any "leading force" in the economy obscures expectations; therefore it becomes more difficult to make a reliable risk assessment for low-quality borrowers (representing the so-called low-grade bonds). In short, the interest rate spread gives us a means to probe the mood, expectations and forecasts of managers, a means which is probably more reliable than the standard confidence indexes obtained from surveys (in this respect see the last section).
Although in many econophysical models of the stock market [3,5,7,9,10,16] interest rates do not play a role per se, the fact that uncertainty is greater in the downward phase of the speculative cycle than in the upward phase could be built into the models by adjusting the randomness of the stochastic variables used in Monte Carlo simulations. In contrast, interest rates usually play a determining role in econometric models. A particularly attractive model of that kind is the Levy-Levy-Solomon model; it describes the stock and bond markets as communicating vessels and how traders switch from one to the other. The book by Oliveira et al. [12, chapter 4] details the assumptions of the model and, through simulations, explains how it works and what results it leads to.
The data
Monthly stock price data going back into the 19th century can be found fairly easily; possible sources are [4,8,17]. Measuring interest rate spreads is a more difficult matter. To begin with, it is not obvious which estimates should be used. The primary source about bond rates is [8]; furthermore a procedure for constructing the spread measure was proposed in [11]. As a matter of fact, Mishkin's stimulating paper provided the main incentive for the writing of the present paper. Mishkin proposed to represent the spread by the difference between the one-fourth of the bonds of the lowest grade (i.e. high rates) and the one-fourth of the bonds of the best grade (i.e. low rates). It turns out that even for the mid-nineteenth century Macaulay's data provided at least three bonds in each of these classes, which is sufficient to give acceptable accuracy; for the more recent period 1888-1935 there are as many as 10 bonds in each "quartile". For post-World War II crashes, Macaulay's series can be extended by the data in [1]. More detailed comments about how these two measures compare can be found in [11].
Results
It can be seen that the decline in stock prices is mirrored in a similar increase in interest rate spread.
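As an illustration of the Mishkin-style spread construction described in the data section, the following Python sketch computes, for each month, the difference between the quarter of bonds with the highest yields (lowest grade) and the quarter with the lowest yields (best grade). Averaging within each quartile and the synthetic yield table are assumptions made for illustration; they are not details taken from [11].

```python
import numpy as np
import pandas as pd

def quartile_spread(rates: pd.DataFrame) -> pd.Series:
    """Mishkin-style interest rate spread for each month: average yield of the
    quarter of bonds with the highest rates (lowest grade) minus the average
    yield of the quarter with the lowest rates (best grade).

    `rates` has one row per month and one column per bond (yields in percent).
    Averaging within each quartile is an assumption; the text only states that
    the two quartiles are compared.
    """
    def one_month(row: pd.Series) -> float:
        r = row.dropna().sort_values()
        k = max(1, len(r) // 4)              # number of bonds in each quartile
        return r.tail(k).mean() - r.head(k).mean()

    return rates.apply(one_month, axis=1)

# Hypothetical monthly yields for six bonds (percent), for illustration only
rates = pd.DataFrame(
    np.random.default_rng(0).uniform(3.0, 8.0, size=(12, 6)),
    columns=[f"bond_{i}" for i in range(6)],
)
spread = quartile_spread(rates)
print(spread.head())
```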
Connection between share prices and interest rate spread between crash and recovery
As a matter of fact the chronological coincidence between the troughs of the stock prices and the peaks of the spread variable is astonishing. Even for the 1929-1932 episode, for which there is a 30-month span between crash and recovery, the peak of the spread variable coincides almost to the month with the end of the price fall. The connection between both variables is confirmed by the correlation coefficients (left-hand correlations in Fig.1): they are all negative and lie between −0.64 and −0.94; note that the smallest correlation (−0.64) corresponds to a relatively small crash with a fall in stock prices of less than 20%. For 19th century episodes the interest rate changes are more or less in the same direction as those of the spread variable; however the correlations with stock prices (right-hand correlations in Fig.1) are substantially lower. For 20th century episodes the picture changes completely: the interest rate no longer moves in the same direction as the spread variable; consequently these correlations become completely random, in contrast to the correlations between stock prices and the spread variable which remain close to −1. In the interpretative framework that we developed above we come up with the following picture. After a crash, uncertainty, doubts and apprehension begin to spread throughout the market; usually (leaving 1929 apart for the moment) the fall lasts about 10 months; during that time, uncertainty continues to increase. Then, suddenly, within one month, the trend shifts in the opposite direction: prices begin to increase and uncertainty to subside. One may wonder how the spread variable behaved in the bull phases. First of all one should note that not all the crashes that we examined were preceded by a wild bull market; so we concentrate here on three typical bull markets that occurred in 1904-1907, 1921-1929 and 1985-1987. During these periods the spread variable remained almost unchanged. Similarly, during the period 1950-1967, which was marked by a considerable increase in stock prices (without however being followed by a major crash), the spread variable remained at a fairly constant level of 1.5%. In contrast, during the period 1968-1979, which was marked by a downward trend in stock prices, the spread variable was substantially larger, in the range 2.5%-3.8%.
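A minimal sketch of the crash-window correlations reported above, assuming monthly price and spread series sharing a DatetimeIndex; whether the coefficients were computed on price levels or on a transformed series is not stated, so working directly on levels is an assumption.

```python
import pandas as pd

def crash_window_correlation(prices: pd.Series, spread: pd.Series,
                             crash_month: str, recovery_month: str) -> float:
    """Pearson correlation between monthly stock prices and the interest rate
    spread, restricted to the window between crash and recovery.

    `prices` and `spread` are monthly series sharing a DatetimeIndex;
    `crash_month` and `recovery_month` (e.g. "1929-09", "1932-07") bound the
    window. Computing the coefficient on price levels is an assumption.
    """
    window = slice(crash_month, recovery_month)
    joined = pd.concat([prices.loc[window], spread.loc[window]], axis=1).dropna()
    return joined.iloc[:, 0].corr(joined.iloc[:, 1])
```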
A simple look at the charts in Fig.1 confirms what we already know, namely that the crisis of 1929-1932 was quite exceptional. This is of course obvious in economic terms (unemployment, drop in industrial production, etc.); it is also true from a purely financial perspective. Stock prices plummeted from a level 100 to less than 20, and the spread variable increased from 2.5% to almost 8%, a three-fold increase. For other episodes (see table 1) the corresponding ratios are all below 1.85.
As an illustration of the intensity of the financial crisis one can mention the fact that November and December 1929 saw the failure of 608 banks; the crisis continued in subsequent months to the extent that in March 1933 one third of all American banks had disappeared [11].
Connection between interest spread and market's uncertainty
In section 3 we interpreted the spread variable as characterizing the uncertainty and lack of confidence existing in the market at a given moment. This interpretation was based on plausible arguments, but one would be on firmer ground if it could be supported by some statistical evidence. In this paragraph we provide at least partial evidence in that respect by comparing the changes of the spread variable to the consumers' lack of confidence as measured by standard surveys. This is shown in Fig. 2; it represents the spread variable along with the lack of confidence index in the United States in the period before and after the 1987 crash. Changes in the two variables are fairly parallel, although the spread variable appears to be much more sensitive and displays larger fluctuations. In the two months before the crash of 19 October 1987 both the uncertainty (measured by the spread variable) and the lack of confidence (estimated through consumer surveys) increased by about 20%; after the crash both variables increased rapidly; but the after-effects of the crash were short-lived and uncertainty decreased after the beginning of 1988. If consumer confidence data could be found for the period prior to World War II it would of course be interesting to perform a similar comparison for other crashes.
Perspectives for an extension to other speculative markets
Relationships which have a validity extending over one century are not frequent either in economics or in finance. Yet, if the above observation remains isolated it will be hardly more than a technical feature of interest for stock market professionals. It is tempting to posit that an increase in uncertainty can play a similar role in other speculative markets. Stock markets are certainly special in so far as they are pure speculative markets; in contrast to property or commodities, stocks do not have any other usage for their buyer than to earn dividends. Nevertheless the stock market seems to be in close connection with the property market; historically stock market crashes have often been preceded by a collapse of property prices; see in this respect [6, p.65] and [14, p.76]. One problem with the property market is its long relaxation time. For that reason we consider here another case, namely the market for gold, silver and diamonds. As is well known, starting in 1977 huge speculative bubbles developed in these items, which collapsed simultaneously in January 1980. Let us concentrate on the diamond market, since the gold market has already been closely investigated, particularly by A. Johansen and D. Sornette. In Fig.3 we represent the price of diamonds along with the consumer lack of confidence index that we already used above. Two observations can be made. (i) There is a huge increase in the lack of confidence index between 1978 and the spring of 1980, that is to say during the period when the bubble developed. This shows that it would be vain to explore the diamond market (or the silver/gold markets) in order to find specific causes for the collapse. It was most certainly triggered by exogenous, psycho-sociological factors. (ii) In the phase between collapse and recovery (March 1980-March 1986), in contrast to what we observed with stock prices, there is no connection whatsoever between diamond price changes and the fluctuations of the lack of confidence index. Perhaps the story would be different if one could use a confidence index specifically pertaining to the diamond market.
Fig.3 Comparison of the price of diamonds before the collapse of January 1980 with the evolution of the lack of confidence index. In the months before the market collapse the lack of confidence increased rapidly. However, after the crash the lack of confidence index does not show the same pattern that we observed in Fig.1. The outcome would perhaps be different if we could use a confidence index focused on the diamond market. Sources: Gems and Gemology 24 (Fall 1998). | 2014-10-01T00:00:00.000Z | 1999-10-14T00:00:00.000 | {
"year": 1999,
"sha1": "52bf2c2e2dbaa6425b0917b27a896a644b3eca7c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9910213",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "da44e1f0764b5ae508936f856ba79d936943cbfb",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics",
"Physics",
"Mathematics"
]
} |
198260374 | pes2o/s2orc | v3-fos-license | Early removal of senescent cells protects retinal ganglion cells loss in experimental ocular hypertension
Abstract Experimental ocular hypertension induces senescence of retinal ganglion cells (RGCs) that mimics events occurring in human glaucoma. Senescence‐related chromatin remodeling leads to profound transcriptional changes including the upregulation of a subset of genes that encode multiple proteins collectively referred to as the senescence‐associated secretory phenotype (SASP). Emerging evidence suggests that the presence of these proinflammatory and matrix‐degrading molecules has deleterious effects in a variety of tissues. In the current study, we demonstrated in a transgenic mouse model that early removal of senescent cells induced upon elevated intraocular pressure (IOP) protects unaffected RGCs from senescence and apoptosis. Visual evoked potential (VEP) analysis demonstrated that remaining RGCs are functional and that the treatment protected visual functions. Finally, removal of endogenous senescent retinal cells after IOP elevation by a treatment with senolytic drug dasatinib prevented loss of retinal functions and cellular structure. Senolytic drugs may have the potential to mitigate the deleterious impact of elevated IOP on RGC survival in glaucoma and other optic neuropathies.
| INTRODUCTION
Glaucoma is comprised of progressive optic neuropathies characterized by degeneration of retinal ganglion cells (RGC) and resulting changes in the optic nerve. It is a complex disease where multiple genetic and environmental factors interact (Skowronska-Krawczyk et al., 2015; Weinreb, Aung, & Medeiros, 2014). Two of the leading risk factors, increased intraocular pressure (IOP) and age, are related to the extent and rate of RGC loss. Although lowering IOP is the only approved and effective treatment for slowing worsening of vision, many treated glaucoma patients continue to experience loss of vision and some eventually become blind. Several findings suggest that age-related physiological tissue changes contribute significantly to the neurodegenerative defects that result in the loss of vision.
Mammalian aging is a complex process where distinct molecular processes contribute to age-related tissue dysfunction. It is notable that the specific molecular processes underlying RGC damage in aging eyes are poorly understood. While no single defect defines aging, several lines of evidence suggest that activation of senescence is a vital contributor (He & Sharpless, 2017). In a mouse model of glaucoma/ischemic stress, we reported the effects of p16Ink4a on RGC death (Skowronska-Krawczyk et al., 2015). Upon increased IOP, the expression of p16Ink4a was elevated, and this led to enhanced senescence in RGCs and their death. Such changes most likely cause further RGC death and directly cause loss of vision. In addition, the analysis of p16KO mice suggested that lack of the p16Ink4a gene protected RGCs from cell death caused by elevated IOP (Skowronska-Krawczyk et al., 2015). Importantly, elevated expression of p16INK4a and senescence were both detected in human glaucomatous eyes (Skowronska-Krawczyk et al., 2015). Therefore, for the first time, p16Ink4a was implicated as a downstream integrator of diverse signals causing RGC aging and death, both characteristic changes in the pathogenesis of glaucoma. Our findings were further supported by a subsequent report showing that p16Ink4a was upregulated by TANK binding kinase 1 (TBK1), a key regulator of neuroinflammation, immunity, and autophagy activity. TBK1 also caused RGC death in ischemic retina injury (Li, Zhao, & Zhang, 2017). Of particular note, a recent bioinformatic meta-analysis of a published set of genes associated with primary open-angle glaucoma (POAG) pointed at senescence and inflammation as key factors in RGC degeneration in glaucoma (Danford et al., 2017).
Glaucoma remains relatively asymptomatic until it is severe, and the number of affected individuals is much higher than the number diagnosed. Numerous clinical studies have shown that lowering IOP slows the disease progression (Boland et al., 2013;Sihota, Angmo, Ramaswamy, & Dada, 2018). However, RGC and optic nerve damage are not halted despite lowered IOP, and deterioration of vision progresses in most treated patients. This suggests the possibility that an independent damaging agent or process persists even after the original insult (elevated IOP) has been ameliorated.
We hypothesized that early removal of senescent RGCs that secrete senescence-associated secretory phenotype (SASP) proteins could protect remaining RGCs from senescence and death induced by IOP elevation. To test this hypothesis, we used an established transgenic p16-3MR mouse model (Demaria et al., 2014) in which the systemic administration of the small molecule ganciclovir (GCV) selectively kills p16INK4a-expressing cells. We show that the early removal of p16Ink4a+ cells has a strong protective effect on RGC survival and visual function. We confirm the efficiency of the method by showing the reduced level of p16INK4a expression and lower number of senescent β-galactosidase-positive cells after GCV treatment.
FIGURE 1 Removal of early senescent cells has a neuroprotective effect on RGCs. (a) Schematic representation of the p16-3MR transgene. Triple fusion of luciferase, the red fluorescent protein, and tyrosine kinase from HSV virus are under control of the regulatory region of the p16Ink4a gene. (b) Plan of the experiment. After unilateral IOP elevation, mice are injected daily with GCV (25 mg/kg) intraperitoneally. At day 5, VEP is measured and tissue is collected for further experiments. (c) Representative images of retina flat-mount immunohistochemistry at day five with anti-Brn3a antibody specifically labeling ~80% of RGC cells. (d) Quantification of RGC number at day five after the treatment of WT animals. N ≥ 5 animals in each group. (e) Quantification of RGC number at day five after the treatment of p16-3MR animals. N = 8 animals in each group. In d and e, statistical tests were performed using ANOVA with post hoc Tukey correction for multiple testing. *p < .05, **p < .01, ***p < .001; n.s., not significant
Finally, we show that treatment of p16-3MR mice with a known senolytic drug (dasatinib) has a similar protective effect on RGCs as compared to GCV treatment in p16-3MR mice.
| Animals
All animal experiments were approved by the UC San Diego
| Drug treatment
The p16-3MR transgenic model (Figure 1a), in which the mice carry a trimodal reporter protein (3MR) under the control of p16 regulatory region (Demaria et al., 2014)
| Visual evoked potential
VEP measurements were taken at five days post-IOP elevation.
This protocol was adapted from prior studies (Ridder & Nusinowitz, 2006). Mice were dark-adapted for at least 12 hr before the procedure. Animals were anesthetized with ketamine/xylazine and their eyes dilated as above. The top of the mouse's head was cleaned with an antiseptic solution. A scalpel was used to incise the scalp skin, and a metal electrode was inserted into the primary visual cortex through the skull, 0.8 mm deep from the cranial surface, 2.3 mm lateral to the lambda. A platinum subdermal needle (Grass Telefactor) was inserted through the animal's mouth as a reference and through the tail as ground. The measurements commenced when the baseline waveform became stable, 10-15 s after attaching the electrodes.
Flashes of light at 2 log cd·s/m² were delivered through a full-field Ganzfeld bowl at 2 Hz. Signal was amplified, digitally processed by the software (Veris Instruments), then exported, and peak-to-peak responses were analyzed in Excel (Microsoft). To isolate the VEP of the measured eye from the crossed signal originating in the contralateral eye, a black aluminum foil eyepatch was placed over the eye not undergoing measurement. For each eye, the peak-to-peak response amplitude of the major component P1-N1 in IOP eyes was compared to that of their contralateral non-IOP controls. Following the readings, the animals were euthanized, and their eyes collected and processed for immunohistological analysis.
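A minimal numpy sketch of the peak-to-peak quantification described above; the latency windows used to locate P1 and N1 are assumptions made for illustration, since the text states only that peak-to-peak P1-N1 responses were analyzed after export.

```python
import numpy as np

def p1_n1_amplitude(t_ms: np.ndarray, vep_uv: np.ndarray,
                    p1_window=(30.0, 80.0), n1_window=(60.0, 150.0)) -> float:
    """Peak-to-peak amplitude of the major P1-N1 component of an averaged
    flash-VEP trace.

    t_ms: time base in milliseconds relative to flash onset.
    vep_uv: averaged cortical potential in microvolts.
    The latency windows used to search for P1 (positive peak) and N1
    (negative trough) are illustrative assumptions.
    """
    p1_mask = (t_ms >= p1_window[0]) & (t_ms <= p1_window[1])
    n1_mask = (t_ms >= n1_window[0]) & (t_ms <= n1_window[1])
    p1 = float(vep_uv[p1_mask].max())   # most positive point in the P1 window
    n1 = float(vep_uv[n1_mask].min())   # most negative point in the N1 window
    return p1 - n1
```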
| Immunohistochemistry
Following euthanasia, eyes were enucleated and fixed in 4% paraformaldehyde (PFA) in PBS (Affymetrix) for 1 hr and subsequently transferred to PBS. The eyes were then dissected, the retinas flatmounted on microscope slides, and immunostained using a standard sandwich assay with anti-Brn3a antibodies (Millipore, MAB1595) and secondary AlexaFluor 555 anti-mouse (Invitrogen, A32727).
Mounted samples (Fluoromount, Southern Biotech 0100-01) were imaged in the fluorescent microscope at 20x magnification (Biorevo BZ-X700, Keyence), focusing on the central retina surrounding the optic nerve. Overall damage and retina morphology were also taken into consideration for optimal assessment of the retina integrity. Micrographs were quantified using manufacturer software for Brn3a-positive cells in 6 independent 350 × 350 µm areas per flat mount.
| Real-time PCR
Total RNA extraction from mouse tissues, cDNA synthesis, and RT-qPCR experiments were performed as previously described (Skowronska-Krawczyk et al., 2015). Assays were performed in triplicate. Relative mRNA levels were calculated by normalizing results using GAPDH. The primers used for RT-qPCR are the same as in (Skowronska-Krawczyk et al., 2015). The differences in quantitative PCR data were analyzed with an independent two-sample t test.
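The text states only that relative mRNA levels were normalised to GAPDH; one common way to do this is the 2^(-ΔΔCt) method, sketched below under that assumption with illustrative Ct values.

```python
def relative_expression(ct_target: float, ct_gapdh: float,
                        ct_target_ctrl: float, ct_gapdh_ctrl: float) -> float:
    """Relative mRNA level by the 2^(-ddCt) method, normalised to GAPDH.

    ct_target / ct_gapdh: Ct values in the treated (IOP) sample.
    ct_target_ctrl / ct_gapdh_ctrl: Ct values in the control sample.
    The use of the ddCt formula is an assumption; the text states only that
    results were normalised using GAPDH.
    """
    d_ct_sample = ct_target - ct_gapdh              # delta-Ct, treated sample
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl   # delta-Ct, control sample
    return 2.0 ** -(d_ct_sample - d_ct_control)     # fold change vs control

# Illustrative values, e.g. p16Ink4a in an IOP retina vs the contralateral eye
print(relative_expression(26.1, 18.0, 27.9, 18.1))
```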
| SA-βgal assay to test senescence in retinas of mouse eyes
Senescence assays were performed using the Senescence β-Galactosidase Staining Kit (Cell Signaling) according to the manufacturer's protocol. Images were acquired using a Hamamatsu Nanozoomer 2.0HT Slide Scanner and quantified in independent images of 0.1 mm² covering the areas of interest using Keyence software.
| RESULTS
Intraocular pressure was increased in one eye of transgenic mice bearing the p16-3MR construct (Figure 1a). After IOP elevation, mice were intraperitoneally injected with GCV for five consecutive days (Figure 1b) to specifically deplete p16Ink4a-positive (p16+) cells.
In parallel, wild-type animals were subjected to the same protocol, that is, underwent five daily GCV injections after unilateral IOP elevation. Retina flat-mount immunohistochemistry and RGC quantification were used to assess the potential impact of drug treatment. We observed that five-day administration of GCV after IOP elevation protected RGC numbers in p16-3MR animals but not in wild-type animals (Figure 1d,e). Next, to test whether the protection of RGC numbers in GCV-treated retinas was accompanied by the protection of the visual circuit integrity on day five, the in vivo signal transmission from the retina to the primary visual cortex was assessed by measuring visual evoked potentials (VEP) (Figure 2a) (Bui & Fortune, 2004; Porciatti, 2015). In brief, the reading electrode was placed in the striate visual cortex, with the reference electrode in the animal's mouth and the ground electrode in the tail. Flash stimuli were presented in a Ganzfeld bowl. Response amplitudes were quantified from the peak-to-peak analysis of the first negative component N1. Using this approach, we found that eyes subjected to IOP elevation showed decreased VEP P1-N1 amplitude (Figure 2b) compared to contralateral non-IOP control eyes. However, there was a marked rescue of VEP signals in transgenic animals treated with GCV (Figure 2b).
Further quantification showed significant vision rescue upon GCV treatment only in p16-3MR and not WT animals (Figure 2c), confirming the specificity of GCV treatment.
Our previous studies indicated that the increase in p16INK4a expression could be first observed as early as day three post-IOP elevation (Skowronska-Krawczyk et al., 2015). Therefore, we chose this time-point to analyze the effectiveness of GCV treatment on senescent cells in treated and control retinas of p16-3MR animals. RGC quantification showed that in animals not injected with GCV only ~15%-20% of cells disappeared at day 3 (compared to ~45%-50% on day 5).
To test whether GCV treatment indeed removed senescent cells in the retina, we used two approaches. First, we quantified the (Figure 3a). Day 3 also corresponds to the highest
FIGURE 3 Senescence is lowered upon GCV treatment ~2 days before the effects on RGC numbers are observed. (a) At day 3 after IOP, only 20% of RGCs are lost compared to the non-treated eye. Similar numbers of cells are lost in GCV-treated eyes at this stage. N = 3 (non-GCV) and N = 5 (GCV); ANOVA, *p < .05, **p < .01; n.s., not significant. (b) p16Ink4a expression is significantly lower in affected retinas isolated from GCV-treated p16-3MR animals at day 3 after IOP elevation. t test, **p < .01. (c) Number of SA-β-gal-positive cells is lowered upon GCV treatment. Blue arrow: remaining senescent cell. (d) Quantification of the number of senescent cells upon IOP elevation in retinas isolated from mice treated and non-treated with GCV. N = 4 (non-GCV), N = 6 (GCV); ANOVA, **p < .01
To inquire in an unbiased way about the differences in signaling pathways and cellular processes affected by IOP, GO analysis using PANTHER was performed (Mi, Muruganujan, Ebert, Huang, & Thomas, 2019). This approach revealed that processes of the immune system response, inflammation, and extracellular matrix composition and cell-matrix interaction were significantly changed in IOP samples (Table 1). We also detected significantly deregulated genes involved in apoptosis, microglial activation, and interleukin-6 and interleukin-8 production and secretion. This analysis shows that many mechanisms are induced upon an acute IOP elevation, most probably causing additional transcriptional stress to the cell.
Further analysis revealed that the genes involved in cellular senescence, extracellular matrix molecules, and factors involved in apoptosis (Table 2) (Pawlikowski et al., 2013) were significantly deregulated upon IOP elevation. Importantly, 3-day treatment to remove p16+ cells significantly mitigated this response (Figure 4c).
These data are in agreement with the loss of senescent cells upon GCV treatment (Figure 3b) and the lower detrimental impact of senescent cells on surrounding cells.
Additional GO analysis of the 617 genes which were significantly deregulated upon IOP elevation specifically in non-treated retinas (i.e., genes where the effects of IOP were dampened by GCV-mediated removal of senescent cells) (Figure 4d) identified a specific enrichment of a class of genes belonging to the ABL1 pathway and ABL1 downstream targets (Fig. S1). Prompted by this finding, we explored whether dasatinib, a well-known senolytic drug and a Bcr-Abl and Src family tyrosine kinase inhibitor, could have a beneficial effect similar to GCV in p16-3MR mice. To this end, p16-3MR mice were treated with dasatinib (5 mg/kg) or vehicle for 5 days by intraperitoneal injection, similarly to the experimental procedure used for GCV (Figure 1b). Performing this experiment in the transgenic mice allowed direct comparison of the efficiencies of both treatments in the same mouse strain. At day five after IOP elevation, VEP measurement was performed and retinas were immunostained to quantify RGC loss. We observed that dasatinib treatment prevented the loss of RGCs (Figure 5c), similar to what was observed in GCV-treated animals (Figure 1e). Most importantly, VEP analysis revealed that senolytic drug treatment successfully prevented vision loss upon IOP elevation (Figure 5d).
Finally, we explored whether the protective impact of the drug is caused by the sustained inhibition of the cellular processes and whether it is maintained even after the drug is no longer present.
To do that, p16-3MR mice were treated with dasatinib (5 mg/kg) or vehicle for 5 days by intraperitoneal injection, similarly to the experimental procedure used for GCV (Figure 1b). After that, the mice were no longer treated with drug or PBS, and at day twelve after IOP elevation, functional measurement was performed and RGCs were quantified. Also in this treatment regime, dasatinib prevented the loss of RGCs (Figure 5c), similar to what was observed in GCV-treated animals (Figure 1e). Additionally, VEP analysis revealed that senolytic drug treatment with a seven-day "chase" still successfully prevented vision loss upon IOP elevation (Figure 5d).
| DISCUSSION
The collective findings of the current study strongly support the notion that removal of senescent cells provides beneficial protective effects.
Dasatinib is a selective tyrosine kinase inhibitor that is commonly used in the therapy of chronic myelogenous leukemia (CML). Other studies have shown that treatment with dasatinib is effective in destroying senescent fat cell precursors.
Our RNA-seq data pointed to this senolytic drug as a potential candidate for in vivo treatment of retinal damage induced by IOP elevation. Notably, we found that the level of RGC protection resembles the one obtained with GCV treatment of the p16-3MR transgenic line.
Based on these findings, we conclude that dasatinib treatment resulted in RGC protection through removal of senescent cells. It will be of interest to further investigate the possible therapeutic effects of other senolytic drugs in glaucoma and glaucoma models.
The gene encoding p16INK4a, CDKN2A, lies within the INK4/ARF tumor suppressor locus on human chromosome 9p21; this is the most significant region to be identified as having an association with POAG in different population samples (Ng, Casson, Burdon, & Craig, 2014). Although the molecular mechanism of many of these associations is yet to be described, we have shown that one of them involves the Akt-Bmi1 phosphorylation pathway (Li et al., 2017). Given the complexity of the 9p21 locus, we believe that there are more pathways involved in p16Ink4a regulation, and further work is needed to understand the role of p16Ink4a as an integrator of these signals, especially upon IOP elevation.
Several collaborative efforts identified numerous SNPs localized within the 9p21 locus to be highly associated with the risk of open-angle glaucoma including normal-tension glaucoma (NTG), a glaucomatous optic neuropathy not associated with elevated IOP (Killer & Pircher, 2018;Wiggs & Pasquale, 2017). Intriguingly, one of the top variants associated with the risk of NTG is located in the gene TBK1, a factor that has been recently shown to be implicated in upregulation of p16ink4a gene (Li et al., 2017). Finally, recent studies have also revealed that specific methylation patterns in the 9p21 locus are strongly associated with the risk of NTG glaucoma (Burdon, 2018). It is notable that the positions of most, if not all, of these SNPs and methylation markers overlap with active regulatory regions within the locus identified by ENCODE (Consortium, 2012).
Although regulation of the 9p21 locus in the context of many diseases and aging is under extensive investigation, it still remains to be explicitly addressed in relation to glaucoma.
Another major type of glaucomatous optic neuropathy is angle closure glaucoma (ACG), a condition characterized by blockage of the drainage angle of the eye. To date, there is no study reporting genetic variants or methylation markers in the 9p21 locus significantly associated with the risk of ACG, despite several studies implicating various molecular mechanisms (Evangelho, Mogilevskaya, Losada-Barragan, & Vargas-Sanchez, 2019). Nevertheless, the fact that progressive vision loss is observed in PACG patients, even after lowering the IOP (Brubaker, 1996), raises the question whether an association could be observed between 9p21 markers and the progression rather than the risk of the disease. Further studies to unravel such associations are necessary.
FIGURE 5 Dasatinib protects against retina degeneration. (a) Plan of the experiment. After unilateral IOP elevation, mice are injected daily with dasatinib (5 mg/kg) intraperitoneally. At day 5, VEP is measured and tissue is collected for further experiments. Immunohistochemistry of Brn3a and activated caspase shows an increase of apoptosis at day 3 after IOP treatment. (b) Retina flat-mount immunohistochemistry at day 5 with anti-Brn3a antibody specifically labeling ~80% of RGC cells. (c,d) Quantification of RGC number (c) or VEP responses (d) at day 5 (four conditions) or day 12 (additional 7 days of "recovery", two conditions) after the 5-day treatment of p16-3MR animals with dasatinib. N > 4 animals in each group. Statistical tests were performed using ANOVA with post hoc Tukey correction for multiple testing. *p < .05, ***p < .001; n.s., not significant. (e) Model. Top: upon elevated IOP, damaged cells become senescent and start to express SASP molecules. As the disease progresses, the SASP molecules induce senescence or apoptosis in neighboring cells. Bottom: when senescent cells are removed using a senolytic drug, the neighboring cells are not exposed to detrimental SASPs and disease progression is significantly slowed down. Remaining cells are healthy.
Markers of cellular senescence, such as expression of p16Ink4a and SASP molecules, dramatically increase during aging in both humans and mice. Several studies suggest that p16Ink4a+ cells act to shorten healthy lifespan by promoting age-dependent changes that functionally impair tissues and organs (Jeon et al., 2017; Krishnamurthy et al., 2006). Intriguingly, a recent explosion of studies has shown that removal of senescent cells using senolytic drugs in progeroid (accelerated aging phenotype) and healthy mice induces lifespan extension and improves the health of animals (Baker et al., 2011; Scudellari, 2017; Xu et al., 2018). Our studies suggest a potential use of such therapy to reduce glaucoma-associated blindness, either as a stand-alone treatment or together with IOP-lowering therapies.
ACKNOWLEDGMENTS
We thank Sherrina Patel for help with this project. This work was
CONFLICT OF INTEREST
Nothing to declare.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are openly available in the GEO database (GSE141725) and in Dryad at https://doi.org/10.6075/J0707ZTM (Rocha, 2019). | 2019-07-26T08:07:58.563Z | 2019-06-27T00:00:00.000 | {
"year": 2019,
"sha1": "c43a5e677e976059a2d4613c5aa5247cd5b6df3a",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/acel.13089",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b942095cdc1ff92557107d7cfe10cef9168a1cd2",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
222378319 | pes2o/s2orc | v3-fos-license | Promise and Challenges of a Data-Driven Approach for Battery Lifetime Prognostics
Recent data-driven approaches have shown great potential in early prediction of battery cycle life by utilizing features from the discharge voltage curve. However, these studies caution that data-driven approaches must be combined with specific design of experiments in order to limit the range of aging conditions, since the expected life of Li-ion batteries is a complex function of various aging factors. In this work, we investigate the performance of the data-driven approach for battery lifetime prognostics with Li-ion batteries cycled under a variety of aging conditions, in order to determine when the data-driven approach can successfully be applied. Results show a correlation between the variance of the discharge capacity difference and the end-of-life for cells aged under a wide range of charge/discharge C-rates and operating temperatures. This holds despite the different conditions being used not only to cycle the batteries but also to obtain the features: the features are calculated directly from cycling data without separate slow characterization cycles at a controlled temperature. However, the correlation weakens considerably when the voltage data window for feature extraction is reduced, or when features from the charge voltage curve instead of discharge are used. As deep constant-current discharges rarely happen in practice, this imposes new challenges for applying this method in a real-world system.
I. INTRODUCTION
The proliferation of Li-ion batteries is underway with a shift to electric vehicles and an increasing demand for commercial and residential energy storage systems. A key challenge is the prediction of cycle life under various usage patterns and operating temperatures. In particular, predictions that use only early-cycle data, without long historical information, can open a new chapter in battery design, production, and usage optimization [1]. However, early prediction is typically challenging because of the non-linearity of battery degradation. For instance, a weak correlation (ρ=0.1) was found in [2] between remaining capacity at cycle 80 and capacity values at cycle 500.
Approaches for prediction of the future degradation and cycle life of Li-ion batteries can be classified into three general categories: physics-based modeling of the main degradation mechanisms [3], [4], phenomenological modeling of capacity fade or internal resistance increase [5], [6], and recently data-driven machine learning methods [7]- [9].
The physics-based degradation modeling approach uses partial differential equations (PDEs) to quantify the physical and chemical effects occurring inside a battery, such as ion diffusion and electrochemical reactions. This can then be used to explain input and output relationships, such as between input current profile and resulting capacity fade or resistance increase. This approach gives very detailed information about the processes occurring inside the battery, which can be used not only to predict degradation but also mitigate it through battery design and management. However, the complexity of the models (several PDEs) and relative paucity of available data (current, voltage and temperature data) makes it hard to verify the accuracy, both of the model itself and of its parameterization.
The phenomenological modeling approach simplifies the prediction of degradation of Li-ion batteries by focusing on changes in the specific measures of degradation, such as internal resistance or cell capacity, with respect to cycle number or Amp-hour (Ah) throughput. For example, Goebel et al. [5] use an exponential growth model as a degradation model to capture the increasing trend of internal resistance of the battery with respect to time in weeks. In this approach, parameters of the empirical degradation model (e.g. exponential growth model) are estimated from historical/previous data, and future trend is extrapolated at the time of making a prediction. This approach is appealing due to its simplicity, but it can fail to account for the complexity of Li-ion battery degradation, which usually depends on more than just time and cycle number, and entirely ignores the rich data available from the voltage curve. Furthermore, since identifying the degradation model parameter depends on the available previous data, it often requires long historical data (at least 25% along the trajectory to end-of-life [10]) to forecast future trends accurately.
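As an illustration of this phenomenological approach, the sketch below fits an exponential growth model to illustrative internal-resistance measurements and extrapolates it. The specific parameterisation R(t) = r0 + a·exp(b·t) and the synthetic data are assumptions; [5] is cited here only as using an exponential growth model with respect to time in weeks.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_growth(t_weeks, r0, a, b):
    """Exponential growth model for internal resistance: R(t) = r0 + a*exp(b*t).
    This parameterisation is an assumption; [5] is cited only as using
    'an exponential growth model'."""
    return r0 + a * np.exp(b * t_weeks)

# Illustrative historical measurements: internal resistance (ohm) vs time (weeks)
t = np.arange(0.0, 30.0, 2.0)
r_meas = exp_growth(t, 0.050, 0.002, 0.08) \
    + np.random.default_rng(1).normal(0.0, 5e-4, t.size)

# Fit the model to the available history, then extrapolate the future trend
params, _ = curve_fit(exp_growth, t, r_meas, p0=(0.05, 0.002, 0.05))
print(f"forecast resistance at week 60: {exp_growth(60.0, *params):.4f} ohm")
```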
Recently, Severson et al. [9] demonstrated a third approach that can be used to make predictions using only early-cycle data without the need for complex electrochemical models by showing that carefully extracted features from the discharge voltage curves in early (first 100) cycles can be used to predict a battery's cycle life, ranging from 150 to 2300 cycles, with 9.1% test error. In their study, a dataset of 124 LFP/graphite A123 cells was generated from different fast charging rates and their charging profiles. The cells were cycled between full charge and full discharge with identical nominal temperatures and discharge C-rates, but varying charge C-rates.
In this work, we construct a similar dataset of 12 NMC/graphite cells cycled to failure (as described in Section II) and probe the feasibility of this particular data-driven approach. Firstly, in Section III, we investigate whether the same features from the discharge voltage curves in early cycles show good correlation with cycle life, even for a different chemistry and with a broader range of operating temperatures and discharge C-rates, and show that end-of-life can be predicted with 16% error.
Secondly, in Section IV, we explore whether the approach can be pushed further, beyond using full, constant-current discharges. To do this, we consider two scenarios in which full, constant-current discharge data could be unavailable. In the first scenario, constant-current discharge data is available but only for a partial data window. We examine how large the partial data window must be in order to provide adequate results. In the second scenario, no constant-current discharge data is available, only constant-current charge data. This scenario more closely mirrors a real-life situation in which discharges occur in response to user demand (for example, driving an electric vehicle), but charges can be more closely controlled. In this case, we investigate whether features in the charge voltage data can be used to predict cycle life.
II. DATA GENERATION
A. Experimental method
To study the degradation under a variety of conditions, 12 identical 5 Ah NMC/graphite pouch cells were selected from a batch manufactured at the University of Michigan Battery Lab (UMBL), with specifications shown in Table I. Initial formation cycles were performed after manufacture to ensure the safety and performance stability of the cells. Then the cells were assembled inside fixtures. The dynamic tests were performed using a battery cycler (Biologic, France). The fixtures were installed inside a climate chamber (Cincinnati Ind., USA) in order to control the temperature during cycling and the characterization tests. The temperature was measured using a K-type thermocouple (Omega, USA) placed on the surface of the battery.
The cycle aging experiments were designed to cover an array of test conditions such as different charge/discharge C-rates and different nominal temperatures. The test conditions range from low C-rate, room-temperature baseline aging to high C-rate, hot-temperature accelerated aging, and are summarized in Table II. Each of the test conditions was performed at three different temperatures: hot (45 °C), cold (−5 °C), and room temperature (25 °C). Before the start of the cycling the cells were held at the target temperature for 3 hours to ensure thermal equilibrium. The cycling consists of a constant current (CC) charge until reaching 4.2 V, followed by a constant voltage (CV) phase at 4.2 V until the current falls below C/50, and finally a CC discharge until reaching 3.0 V.
Intermediate diagnostic (C/20) tests were performed at a periodic number of cycles corresponding to an expected 5% capacity loss. For these diagnostic tests, the cells were brought back to room temperature (25 °C) and held at rest for 3 hours to ensure thermal equilibrium. The diagnostic test consists of an initial C/5 discharge until reaching 3.0 V, followed by a constant voltage (CV) phase at 3.0 V until the charge current falls below C/50 and a 1 hour rest to ensure the cell is fully discharged. This is followed by a C/20 charge until reaching 4.2 V, then a constant voltage (CV) phase at 4.2 V until the charge current falls below C/50 and a 1 hour rest. Finally, the cell is discharged at C/20 until reaching 3.0 V.
B. Discharge capacity data
The discharge capacity curves of the 12 cells cycled to full depth-of-discharge are shown in Figure 1a. We use the discharge capacity as measured by the diagnostic tests, since the cells do not reach the capacity limits during cycling due to high internal resistance, especially at cold temperatures and/or high C-rates.
Discharge capacity is plotted as a function of Ah-throughput, rather than cycle number, because cells charged and discharged between fixed voltage limits at different temperatures and C-rates observe different Ah-throughput per cycle. For these 5 Ah cells, an upper bound for one equivalent cycle is 10 Ah of throughput (including charge and discharge). We define "Ah-throughput life" as the Ah-throughput at which 80% of initial capacity was reached; this is the equivalent of cycle life when measuring by cycle number.
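A minimal sketch of how the Ah-throughput life can be computed from the periodic diagnostic capacity measurements; the 80% threshold is defined above, while linear interpolation between diagnostic points and the illustrative data are assumptions.

```python
import numpy as np

def ah_throughput_life(throughput_ah: np.ndarray, capacity_ah: np.ndarray,
                       initial_capacity_ah: float = 5.0,
                       eol_fraction: float = 0.8) -> float:
    """Ah-throughput at which the measured capacity first crosses 80% of the
    initial capacity, using linear interpolation between diagnostic tests.

    Linear interpolation between the periodic C/20 diagnostic points is an
    assumption; the text defines the threshold but not the interpolation.
    """
    threshold = eol_fraction * initial_capacity_ah
    below = np.where(capacity_ah <= threshold)[0]
    if below.size == 0:
        return float("nan")              # cell never reached end of life
    i = below[0]
    if i == 0:
        return float(throughput_ah[0])
    # interpolate between the last point above and first point below threshold
    x0, x1 = throughput_ah[i - 1], throughput_ah[i]
    y0, y1 = capacity_ah[i - 1], capacity_ah[i]
    return float(x0 + (threshold - y0) * (x1 - x0) / (y1 - y0))

# Illustrative diagnostic measurements for one cell
throughput = np.array([0, 500, 1000, 1500, 2000], dtype=float)
capacity = np.array([5.0, 4.8, 4.5, 4.1, 3.8])
print(ah_throughput_life(throughput, capacity))   # ~1667 Ah
```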
The wide variety of operating conditions gives rise to a wide range of Ah-throughput lives for the cells, ranging from 1000 Ah to 3000 Ah. Unlike the LFP/graphite cells used by Severson et al. [9], the NMC/graphite cells used here do exhibit significant capacity loss during early cycling. Figure 1c shows that the percent capacity remaining after the first 200 Ah of throughput does correlate with Ah-throughput life, but only weakly (Pearson correlation coefficient of 0.50). A similar weak correlation between early capacity loss and lifetime capacity loss has previously been reported in the literature [2]. The correlation of the early percentage capacity loss with Ah-throughput life provides a baseline benchmark for features in the discharge voltage curve.
On average, the cells cycled at hot temperature degraded the fastest, followed by the cells cycled at cold temperature, and the cells cycled at room temperature degraded the slowest. Meanwhile, higher charge and discharge rates led to faster degradation on average. While these observations hold in an averaged sense, there were also significant outliers. For example, cell 1, cycled at low C-rate, and cell 12, cycled at room temperature, were among the fastest degrading cells. The fact that averaged trends were as expected but with significant outliers suggests that differences in degradation rate were driven by a mix of operating variations and manufacturing variations. It should be noted that the cells used in these experiments were designed as energy cells, rather than power cells, hence the poor performance at these relatively high C-rates.
A. Feature extraction from discharge voltage curves
Charge and discharge voltage curves contain much more information than simply providing the capacity of the cell. Typically, state-of-health-related information is extracted from either a charge/discharge voltage curve or its derivative with respect to discharge capacity (differential voltage analysis), on a cycle-by-cycle basis. For example, electrochemical models can be parametrized using voltage curves, or features can be directly extracted from them [11], [12].
Severson et al. [9] propose a new approach, comparing two discharge voltage curves from different cycles, shown here in Figure 2a. By inverting the discharge voltage curve to find the discharge capacity as a function of voltage, and then taking the difference in discharge capacities between cycles, we obtain the discharge capacity difference ∆Q x−y (V) = Q x (V) − Q y (V). Here, we define Q x to be the discharge capacity as a function of voltage for the cycle in which x Ah throughput was reached. The typical convention is to use x > y, so that ∆Q x−y is negative, but this is not strictly necessary. All high and low combinations of Ah throughput values are systematically tested (Figure 3), restricting to below 200 Ah to verify the early-prediction capability. From this analysis, we find that the values of 10 Ah and 190 Ah give the best log-correlation between the variance of ∆Q and the Ah-throughput life. In general, the correlation is stronger when using cycles with a larger difference between high and low values for the Ah throughput, which gives further confidence that the variance of the ∆Q curve is capturing physically relevant effects. Figure 2b shows ∆Q 190−10 for all 12 cells, and Figure 2c shows the variance of each of these curves plotted against Ah-throughput life on a log-log axis, with a strong negative correlation (correlation of -0.90). Other statistics of the ∆Q 190−10 curve, such as the minimum and mean values, also show very good correlation, while the skew and kurtosis show poor correlation (see Table III).
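The feature computation just described can be sketched as follows: each discharge voltage curve is inverted to give capacity as a function of voltage, both cycles are interpolated onto a common voltage grid, and summary statistics of the difference curve are taken. The voltage grid and interpolation scheme are assumptions about the preprocessing; they are not specified in the text.

```python
import numpy as np

def delta_q_features(v_early, q_early, v_late, q_late,
                     v_min=3.0, v_max=4.2, n_points=500):
    """Summary statistics of the discharge capacity difference curve
    dQ(V) = Q_late(V) - Q_early(V).

    v_*, q_* are numpy arrays of voltage (V) and cumulative discharge capacity
    (Ah) recorded during the constant-current discharges of the early (~10 Ah
    throughput) and late (~190 Ah throughput) cycles. Interpolation onto a
    uniform voltage grid is an assumption about the preprocessing.
    """
    grid = np.linspace(v_min, v_max, n_points)
    # voltage decreases during discharge, so sort ascending for np.interp
    order_e, order_l = np.argsort(v_early), np.argsort(v_late)
    q_e = np.interp(grid, v_early[order_e], q_early[order_e])
    q_l = np.interp(grid, v_late[order_l], q_late[order_l])
    dq = q_l - q_e
    return {"var": float(dq.var()), "mean": float(dq.mean()), "min": float(dq.min())}
```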
This excellent correlation holds despite the wide range of operating temperatures and charge/discharge C-rates that were used not only to cycle the cells but also to obtain the ∆Q 190−10 curve (the ∆Q 190−10 curve is calculated directly from cycling data, rather than from separate low C-rate characterization cycles at room temperature).
B. Linear regression model for prognostics
To demonstrate the effectiveness of using features from the discharge voltage curve for battery lifetime prognostics, we use some of the data to train a simple machine learning algorithm, then evaluate its predictive power on a test set.
We restrict ourselves to simple regularized linear regressions, as the small size of the dataset could easily lead to over-fitting if using more advanced algorithms. The focus of this work is to better understand the features themselves and the scenarios in which they can be useful for prognostics, rather than to optimize the prognostics algorithm itself. For larger datasets, more advanced algorithms such as the Relevance Vector Machine have been shown to be promising for this type of application [13]. Based on the correlations in Table III, and for simplicity given the small amount of available data, we use the logarithms of the variance, mean, and minimum of ∆Q 190−10 as features, and the log of Ah-throughput life as the objective. Note that final errors are reported for the Ah-throughput life (not its logarithm).
We use a regularized linear regression model, the Ridge regression model. With this algorithm, we find a vector of weights w* that minimizes the cost function J(w) = ||y − Xw||^2 + α||w||^2, where y is the vector of data and X is the matrix of features. The hyperparameter α was tuned for optimal results, to a value of 8. We choose the ridge regression model because the model only has three features, all of which are known to correlate strongly with the objective, so L1-regularized models such as LASSO regression are not necessary (L1-regularization is useful when trying to obtain sparse models where some of the weights are set to zero). We implement the ridge regression using the Python packages numpy [14], pandas [15], [16], and scikit-learn [17]. We randomly split the data into training and testing sets with an 8/4 ratio. Averaging over 100 such random splits, the average training RMSE is 269 Ah (13% MPE) and the average testing RMSE is 335 Ah (16% MPE). This can be compared to the average error from a simple regression to the mean, which gives an average training RMSE of 539 Ah (29%) and an average testing RMSE of 617 Ah (35%). Therefore, using features from the discharge voltage curve approximately halves the error of lifetime prediction. In Figure 4, we show cell-by-cell predictions for a single representative train/test split, which gives a training RMSE of 211 Ah (12% MPE) and a testing RMSE of 277 Ah (14% MPE).
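A sketch of this regression step using scikit-learn's Ridge, whose objective matches the cost function above. The placeholder feature values are illustrative only, and taking logarithms of the absolute values of the (negative) mean and minimum of ∆Q 190−10 is an assumption about the exact transformation used.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder values standing in for the 12 cells: three dQ_{190-10} statistics
# (variance, mean, minimum) and the Ah-throughput life. Illustrative only.
dq_var = rng.uniform(1e-4, 1e-1, 12)
dq_mean = -rng.uniform(1e-2, 1.0, 12)
dq_min = dq_mean - rng.uniform(1e-2, 0.5, 12)
life_ah = rng.uniform(1000, 3000, 12)

# Log-transform features and objective; using the logarithm of the absolute
# values of the (negative) mean and minimum is an assumption
X = np.log10(np.column_stack([dq_var, np.abs(dq_mean), np.abs(dq_min)]))
y = np.log10(life_ah)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=8, test_size=4, random_state=0)

model = Ridge(alpha=8)      # hyperparameter value reported above
model.fit(X_train, y_train)

# Errors are reported in Ah, not in log space
pred_ah = 10 ** model.predict(X_test)
rmse = float(np.sqrt(np.mean((pred_ah - 10 ** y_test) ** 2)))
print(f"test RMSE: {rmse:.0f} Ah")
```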
A. Performance using discharge voltage features
The strong correlation of the features obtained from the discharge voltage curve verifies the capability of the presented data-driven approaches even for different cell chemistry (LFP/graphite in [9] and NMC/graphite in this study), a wide range of operating temperatures, and various C-rates for charge/discharge cycling.
In the case of these NMC/graphite cells, the degradation mechanism leading to these features is also visible in the discharge capacity curve even after 200 Ah of cycling, but using the feature from the discharge voltage curve gives superior predictive power. Using differential voltage analysis on the characterization tests, we can better understand the degradation mechanisms occurring in these cells, and hence rationalize the predictive power of the discharge voltage features. For each characterization cycle (C/20 discharges at room temperature), we use the algorithm of Mohtat et al. [18] to determine which form of degradation has occurred: loss of lithium inventory (LLI), or loss of active material (LAM) in the negative or positive electrode. At the same time, we calculate var(∆Q) for that cycle by taking the difference of the discharge voltage curve with the first characterization cycle for that cell. This allows us to build a map between var(∆Q) and LLI, LAM n , and LAM p , shown in Figure 5. These results show that var(∆Q) tracks LLI in exactly the same way at all C-rates and temperatures, which may be why var(∆Q) is such a good predictor of end-of-life. Note that there may be other forms of degradation, such as SEI formation, lithium plating, or particle cracking, that also contribute to var(∆Q) but are not captured by LLI or LAM in the characterization cycles.
B. Effect of partial data window
In a real-life system, full discharges rarely occur, and so typically only partial discharge data is available. We investigate whether this method is still useful in this case by reducing the size of the window used for the calculation of ∆Q 190−10 , and plotting the resulting correlation of its variance with Ah-throughput life (Figure 6). There is a very rapid drop-off in the log-correlation between variance and Ah-throughput life as soon as the depth of discharge is reduced from 100% (i.e. whenever the final voltage drop-off in Figure 1a is not captured in ∆Q). Other features (minimum, mean) in the ∆Q 190−10 curve show similar correlation with Ah-throughput life when reducing the size of the data window. Figure 6b shows the voltage and differential voltage (dV/dQ) for a representative cell as a function of depth of discharge. This shows that it is important to capture the final voltage drop-off, between 90 and 100% depth of discharge, to get the best correlation. Furthermore, the correlation decreases further when the cell is not discharged below the peak in dV/dQ at 40% depth of discharge.
These results suggest that, for NMC/graphite cells, this data-driven approach is only useful if full discharge data is available. This effect is likely to also be significant for LFP/graphite cells used in [9], since the portion of the voltage curve that produces the ∆Q feature is only reached at around 90% depth of discharge.
C. Performance using charge voltage features
In a real-life system, constant-current discharge voltage data is unlikely to be available since the cell's discharge current must continuously change in order to meet the user's power requirements. Constant-current charge is more likely to be available, since CCCV (constant current, constant voltage) charging is the industry standard.
Considering this scenario, we investigate whether features from the constant-current charge voltage could instead be used for lifetime prediction. The approach is identical to that used for the discharge difference curves: we take the difference of two constant-current charge curves, one after 10 Ah of throughput and one after 190 Ah of throughput, and then take various statistics of this curve. Figure 7 shows this process, with Figure 7c showing the correlation of the logarithms of ∆Q c 190−10 and Ah-throughput life, with only weak negative correlation. The log-correlations of other statistics for the charge voltage difference curve are similarly weak (Table IV). Hence the early-cycle prognostics algorithm does not perform as well when using features from the charge curves instead of the discharge curve.
One possible explanation for this reduced performance could be that some of the ∆Q c 190−10 curves, shown in Figure 7, are missing the lower voltage range: as the cells degrade, the jump in voltage at the beginning of charge becomes much more significant, whereas this is not the case for discharge. Hence the data window available for calculating the feature is reduced, and so the accuracy is reduced, as discussed above.
V. CONCLUSIONS
This work verifies the capability of the data-driven prognostics in early-prediction of Li-ion battery lifetime using features from the discharge capacity difference curve in the case where the discharge C-rates and operating temperatures vary. This suggests that the data-driven approach is very promising in the case where constant-current discharge data can be deliberately generated, even if the operating temperatures and discharge currents vary.
However, we also found that the suggested feature (the variance of discharge capacity difference) loses the strong correlation when using either partial discharge voltage data, or constant-current charge data (for example, as part of CCCV charging). In those cases, different features should be found that have better predictive power. In the case where full constant-current discharges are not available, one promising data-driven alternative is to use the constant-voltage phase of CCCV charging, since this data will almost always be available even in the cases of partial cycling and varying-current discharge. In particular, the time constant of the exponentially decaying current during the CV phase has been shown to be a good predictor of state-of-health [19]- [21], and therefore may also be a good predictor of end-of-life.
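As a sketch of how such a CV-phase feature could be extracted, the snippet below fits a single-exponential decay to the CV-hold current and reports its time constant; the single-exponential form and the synthetic data are assumptions, since [19]-[21] may use different fitting procedures.

```python
import numpy as np
from scipy.optimize import curve_fit

def cv_current(t_s, i0, tau_s):
    """Single-exponential model of the decaying current during the CV hold:
    I(t) = i0 * exp(-t / tau)."""
    return i0 * np.exp(-t_s / tau_s)

# Illustrative CV-phase data: time (s) and measured current (A)
t = np.linspace(0.0, 1800.0, 200)
i_meas = cv_current(t, 2.5, 400.0) + np.random.default_rng(2).normal(0.0, 0.01, t.size)

# Fit the decay and extract the time constant as a candidate health feature
(_, tau_fit), _ = curve_fit(cv_current, t, i_meas, p0=(2.0, 300.0))
print(f"fitted CV time constant: {tau_fit:.0f} s")
```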
This study opens the question of whether the range of degradation rates in the data is caused by the different operating conditions or manufacturing variability between the cells. Investigating how this data-driven approach performs in either extreme (low variability, a wide range of conditions or high variability, narrow range of conditions) requires specifically generated data and remains an interesting open question. The good performance of the algorithm for these NMC/graphite cells with reasonably high manufacturing variability (as the cells are used outside of their recommended conditions) suggests that the algorithm may be robust in both extremes.
ACKNOWLEDGMENTS
The experimental work in this material was supported by the Automotive Research Center (ARC) in accordance with Cooperative Agreement W56HZV-14-2-0001 U.S. Army CCDC GVSC. The authors would also like to thank the University of Michigan Battery Lab for providing the cells used to generate experimental data. Distribution A. Approved for public release; distribution unlimited (OPSEC 4603). | 2020-10-16T01:01:08.374Z | 2020-10-15T00:00:00.000 | {
"year": 2020,
"sha1": "be68b370822542cd4eaf932bcc65ee8e8b959553",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2010.07460",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "be68b370822542cd4eaf932bcc65ee8e8b959553",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science",
"Engineering"
]
} |
219001288 | pes2o/s2orc | v3-fos-license | CONSUMER BEHAVIOUR OF SENIORS ON THE COW’S MILK MARKET IN SLOVAKIA: SILVER PERSUADING TECHNIQUES
Seniors are usually perceived as an unattractive segment, mostly due to their limited spending power. In Slovakia, the number of seniors has continuously been increasing. The population has been growing older. In Europe, more than a quarter of the population is expected to be aged 65 years or older by 2050. That is the main reason why we have to understand the consumer behaviour and decision-making processes of senior consumers. The presented paper deals with the consumer behaviour of seniors on the Slovak market of cow's milk since it is the most commonly consumed type of milk in Slovakia. Opinions of nutrition specialists differ on whether it is beneficial or not for humans to consume milk. However, in general, milk is considered to be an essential component of the diet not only for children but also for adults and especially for seniors because of its high nutrition value. Milk and dairy products should be a daily part of the seniors' diet. Since older people no longer have the necessary enzyme (lactase) to break down milk sugar (lactose), it is recommended that they consume milk products that no longer contain milk sugar but in which lactic acid has been produced by fermentation. Sour milk products such as curd, yoghurt or kefir have a beneficial effect on the stomach, the intestines and also the immune system. Long-term insufficiency of calcium intake causes osteoporosis – a disease that manifests itself in bone loss and structural disorders. It leads to increased fracturing of the bones and thus an increased risk of health complications resulting from them. This study explores senior consumers' preferences for milk and their decision-making strategies on the market of cow's milk. The study is oriented primarily on visual cues catching the attention of consumers. An anonymous survey was conducted on a sample of 470 senior respondents (210 males and 260 females) aged 61 – 84. Using selected psychological tools and a short questionnaire, it was found that Slovak seniors prefer traditional motives and bright colours on milk packaging, they highly prioritise price over quality of milk products and, in comparison with young adults, they are loyal to chosen products or brands. Seniors who score higher on the scale of the neuroticism personality trait state that the packaging of milk products is significant for their decisions. Seniors with higher emotional stability tend to experiment more on the market of milk.
Consumption of milk in Slovakia has a long tradition, and the industrial processing of milk has more than a century of history. However, in the 21st century, the consumption of drinking milk has shown a downward trend, excluding the consumption of cheese, cottage cheese, sour milk products and butter (Kubicova, Kadekova, Dobak, 2014). Only a quarter of older people consume the recommended daily amount of dairy products (Belardi, 2015), although they should eat at least one dairy product (yoghurt, sour milk, curd, low-fat cheese) every day, which is alarming. According to the latest findings, because of that, older people do not get the correct amount of calcium and protein (Hudecova, 2018).
Given the changing consumption patterns on the milk market, it is increasingly essential for those in the dairy supply chain to understand consumer preferences in order to meet their rapidly evolving demands (The Dairy Alliance, 2019). The current situation on the market has put the consumer into the position of the major decisive and leading element of a market (Kurajdova, Taborecka-Petrovicova, 2015). Unfortunately, seniors have usually been viewed as an unattractive market due to the perception that they had limited spending power. However, in the Slovak Republic, they are nowadays quite a significant segment, because in 2016 the share of the elderly (65 or over) among the total population was 14.4% (in comparison, in 1996 it was just 10.9%), this share is still growing, and it is expected to reach 25% by 2050. The purchase behaviour of older consumers differs somewhat from that of their younger counterparts. Many authors (Moschis, 2003; Pettigrew et al., 2005; Petterson, 2007) have specified such differences, which include: expecting personal attention and special services, considering shopping to be a social event, perceiving brand and retailer reputation, more extended time in purchase decision-making, increased store loyalty, etc. Given the range of differences noted, retailers need to give them serious consideration and use them to differentiate their services to different consumer segments.
Methodology and research methods. An anonymous survey was conducted on a sample of 2,104 respondents of different age groups, but in our study, we work with a sample of 470 senior respondents (210 males and 260 females) aged 61 – 84. The Chi-Square Test for Goodness of Fit was used to test whether this sample is representative: H0: the sample is representative; H1: the sample is not representative. The critical value (3.84) was higher than the test statistic (0.99). Hence the null hypothesis was accepted, and with 95% probability this sample reflects the real gender distribution in Slovakia. All respondents have completed at least secondary education, and they live in Slovakia (Figure 1). Three hundred and ten respondents live in rural areas near bigger towns and cities; 160 respondents live directly in towns and cities. All respondents are retired, but 88 respondents work alongside retirement. Information about respondents' income is stated in Table 1. Source: developed by the authors.
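The representativeness check described above can be reproduced with a short script; the population gender proportions used below are placeholders, and only the logic of comparing the test statistic with the critical value of 3.84 mirrors the text.

```python
from scipy.stats import chisquare, chi2

observed = [210, 260]                 # males, females in the senior sample (n = 470)
pop_share = [0.44, 0.56]              # assumed male/female share among Slovak seniors (placeholder)
expected = [p * sum(observed) for p in pop_share]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
critical = chi2.ppf(0.95, df=len(observed) - 1)   # 3.84 for df = 1

if stat < critical:
    print(f"chi2 = {stat:.2f} < {critical:.2f}: H0 not rejected, sample treated as representative")
else:
    print(f"chi2 = {stat:.2f} >= {critical:.2f}: H0 rejected, sample not representative")
```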
A questionnaire was constructed to achieve the research objectives. The questionnaire consists of several parts and takes into consideration possible physical, functional and sensory limitations of senior respondents. The first part represents a short personality inventory based on the Big Five Model (NEO FFI). Four personality traits were investigated: neuroticism (N), extroversion (E), openness to experience (O) and conscientiousness (C). The traits that constitute the five-factor model are extraversion, neuroticism, openness to experience, agreeableness, and conscientiousness. Extraversion, sometimes referred to as surgency, is indicated by assertive, energetic, and gregarious behaviours. Neuroticism is substantially equivalent to emotional instability and can be seen in irritable and moody behaviours. Openness to experience, sometimes referred to as intellect, indicates an individual's inquisitiveness, thoughtfulness, and propensity for intellectually challenging tasks. Agreeableness is indicated in empathic, sympathetic, and kind behaviours. Finally, conscientiousness refers to an individual's sense of responsibility and duty as well as foresight (Grice, 2019). The second part of the questionnaire represents a basic association experiment. Respondents were asked to write down words that occurred to them when they hear/see the word «milk» and words that occurred to them when they hear/see the phrase «packaging of milk». The third part consists of several statements connected to the consumption of milk and buying behaviour on the market of milk. Respondents were asked to express whether they agree or disagree with the mentioned statements on a 5-degree Likert-type scale. The last part of the questionnaire gathers demographic information about respondents.
Data was collected personally with the help of a trained professional (psychologist) and via an online form with a detailed description. The dependences between psychological characteristics of respondents and their preferences were investigated by suitable statistical methods (Kruskal-Wallis one-way analysis of variance, ordinal logistic regression analysis and correlation analysis). We set the probability level alpha (α = 0.05), against which the significance level (p-value) is compared. Based on the comparison of the p-value with alpha (α), we can evaluate the dependence. If the p-value is lower than alpha (α), we reject H0. If the p-value is higher than alpha (α), we do not reject H0.
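The decision rule described above can be sketched as follows; the example groups Likert-type ratings of milk-price importance by monthly income band, and the rating vectors are entirely invented for illustration.

```python
from scipy.stats import kruskal

ALPHA = 0.05  # probability level used throughout the study

# Hypothetical 5-point Likert ratings of "the price of milk is important to me",
# grouped by the monthly income band of the senior respondents.
low_income    = [5, 5, 4, 5, 4, 5, 3, 4]
middle_income = [4, 3, 4, 5, 3, 4, 2, 3]
high_income   = [2, 3, 1, 2, 3, 2, 1, 2]

h_stat, p_value = kruskal(low_income, middle_income, high_income)

if p_value < ALPHA:
    print(f"H = {h_stat:.2f}, p = {p_value:.3f} < {ALPHA}: reject H0, importance differs by income")
else:
    print(f"H = {h_stat:.2f}, p = {p_value:.3f} >= {ALPHA}: do not reject H0")
```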
Results. Based on the short personality inventory, it was found that 43% of respondents are not open to new experiences; they are rather conservative. 57% of respondents achieved high scores in openness to experience. 51.9% of respondents scored highly on the scale of neuroticism. Individuals who score high on neuroticism tend to be moodier and more often experience anxiety, sadness and similar negative feelings. The high incidence of higher neuroticism in the sample may be caused by a deteriorating health state or by social and environmental factors that are connected with higher age. Some research confirmed this assumption, for example, Barefoot et al. (2001), Lowe and Reynolds (2006), and Chapman (2007). The team of Barefoot et al. (2001) argues that the assessment of depressive states was higher in women than in men. While women showed an increase in depressive symptoms from 50 to 60 years of age, men showed these symptoms from 50 to 80 years of age. The explanation is, therefore, evident, and the differences involved include changes in social roles and ageing. The second group of authors reaffirmed the results. Lowe and Reynolds (2006) examined two samples of respondents in their research. The first sample was 226 adult respondents, aged 60 and over. The second sample was 863 elderly respondents of the same age. In both cases, time stability and construct validity were examined, through which various aspects of anxiety were confirmed. In his survey, Chapman (2007) explored not only young adults but also middle-aged respondents and examined whether the critical responses showed sufficient covariance among the selected items. In this survey, respondents also confirmed a higher value of neuroticism caused not only by social but also environmental factors.
68.5% of respondents claim that they are conscientious, and almost 46% of respondents are introverted. The representation of extroversion in the sample corresponds with the anticipated representation of this characteristic in the worldwide population. Researchers state that extroverts make up 50-75% of the worldwide population, but the sources are inconclusive; no research has established the exact worldwide distribution of extroversion. Only 378 of 470 respondents (80.4%) in the sample consume cow's milk (Figure 2). Respondents who do not consume cow's milk suffer from different health problems (allergy, intolerance, other health issues) or they do not like the taste of milk. Respondents who do not consume cow's milk completed the adjusted association experiment but were excluded from further research.
Figure 2. Consumption of cow's milk by Slovak seniors
Source: developed by the authors.
The association experiment has shown the leading associations that consumers have with milk and milk packaging. The results are stated in Table 2.
Table 2. Answers of seniors
Association with milk: cow, white colour, health, cheese, childhood, semolina pudding, a specific brand of milk products
Association with milk packaging: use-by date, cows, white colour, blue colour, nature, grass, a glass of milk, cardboard box, traditional motives
Source: developed by the authors.
The leading associations of Slovak senior consumers represent logical associations of the product's environment and qualities (cow, white colour, health) and presumable childhood memories. Childhood memories have proven to be an essential choice factor in the survey of Carvalho et al. (2016). This author mentions childhood memories as a significant factor in human impact. The results also suggested a significant influence of parental behaviour on respondents in adult age. Senior consumers also associate milk with specific brands that sell cow's milk on the Slovak market. It is the main reason that the associations of consumers in the sample for «milk packaging» are probably biased by already existing packaging. A similar association experiment was conducted among young adults (Millennials) aged 25-42 from the Nitra region (Rybanska et al., 2019), where the team of authors claims that the word «milk» is the first thing that comes to mind when speaking about the package – not just the material itself in which the milk is sold, but also the graphics (colour) of the packaging. The results (for comparison) are stated in Table 3. It appears that the associations of seniors and Millennials are quite similar. Older adults seem to be more pragmatic; they associate the packaging of milk with typical packaging features and qualities (use-by date, cardboard box, traditions). Further research has shown that Slovak seniors prefer traditional motives and bright colours on milk packaging. The importance of packaging in a selected group of the population was also investigated in the survey of Ares et al. (2010). The team of authors concluded that the package (with the presence of the image) was the variable with the highest relative importance. The importance of this variable was high, suggesting that the package plays an essential role in the perception and purchase of food by consumers. Kruskal-Wallis one-way analysis of variance has shown (Figure 3) that there is a significant difference in the subjective importance of milk price among respondents with different monthly income (p = 0.001). The higher the monthly income of respondents, the less important the price of milk. Slovak seniors have, in most cases, a low income, so they highly prioritise price over quality of milk and milk products. Despite this result, almost 70% of respondents stated that they prioritise quality over price. Further questioning has shown that, in comparison with young adults, seniors tend to be more loyal to chosen products or brands. This result regarding the loyalty of seniors was also confirmed in the survey of Iver and Muncy (2005). The author points out that despite the perception of high parity among individual companies existing in the selected market, they can acquire and develop loyal customers. No dependence was proven between the income of respondents and their tendency to buy milk based on packaging (Table 4), as the chi-square statistic is 5.237 and the p-value is 0.732, so the results were not significant at 5%. The packaging of products is very often called a «silent seller», and consumers very often do not see its importance on the conscious level. The design of the package was also dealt with by Rundh (2009) and Suchanek and Kralova (2018), who demonstrate the impact of packaging on both external and internal consumer factors in their studies. According to their results, a well-designed package can increase the customer's interest in the goods or promote the brand on the market.
That is why the dependence between the importance of packaging and individual personality traits was investigated. Based on ordinal logistic regression analysis, it was found (Table 5) that neuroticism is a predictor of consumer behaviour when consumers buy milk based on its packaging. Seniors who are less emotionally stable (score higher on the scale of neuroticism) state that the packaging of milk products is essential for their decisions. This fact has also been confirmed in research by Hills and Argyle (2001). The mentioned authors conducted an empirical study measured by the Oxford Happiness Inventory (OHI). They found that emotional stability was more strongly associated with happiness than extraversion and represented more of the overall variability in multiple regression. Seniors with high emotional stability stated more often that the packaging was not so essential for them. A similar result was also found in the survey by Jang et al. (2005). In their study, the team of authors concluded that emotionally stable respondents did not attach importance to individual product packaging. The correlations among individual answers were found using nonparametric correlation analysis. Respondents who consider milk to be a healthy beverage also state that the quality of milk is crucial for them (Spearman's rho = 0.244**) and that they have a favourite brand of milk, which they buy most often (Spearman's rho = 0.192**).
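The nonparametric correlations reported above (for example, Spearman's rho = 0.244 between perceived healthiness and importance of quality) can be computed directly as shown below; the answer vectors are invented stand-ins for the Likert-scale responses.

```python
from scipy.stats import spearmanr

# Hypothetical 5-point Likert answers of the same respondents to two statements.
milk_is_healthy    = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
quality_is_crucial = [4, 4, 5, 3, 5, 2, 3, 5, 2, 4]

rho, p_value = spearmanr(milk_is_healthy, quality_is_crucial)
print(f"Spearman's rho = {rho:.3f}, p = {p_value:.3f}")
# A positive, significant rho would mirror the reported association between
# considering milk healthy and considering its quality crucial.
```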
Conclusions. The survey found that more than 80% of respondents consume cow's milk. The remaining share of asked people have different reasons why they do not consume this type of milk (health or personal reasons). Many respondents are conservative – they are not open to new opportunities and experiences. More than 51% of respondents have reached the range of neuroticism that causes moodiness and anxiety. The survey found that older adults are more pragmatic, and they associate milk packaging with its typical packaging characteristics (e.g., sell-by date). An important fact is that Slovak consumers prefer traditional motifs and bright colours of milk packaging. Nearly 70% of seniors said they preferred quality over price. The importance of quality was confirmed by Valaskova, Kliestikova and Krizanova (2018), which may cause seniors to tend to be more loyal to selected products or brands. The survey confirmed that for seniors who are less emotionally stable, the packaging of dairy products is essential in their decision making.
Based on the above, we can assume that consumer behaviour on the dairy market can affect the behaviour of manufacturers in this specific market. The way consumers perceive milk packaging can be influenced by the way they perceive milk itself. Also, packaging may affect their associations with particular milk or other milk brands. Thus, we can say that milk packaging is essential for the perception of older adults and their subsequent behaviour in this market. | 2020-04-30T09:10:48.164Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "02d9adb67a9ff0f23205cee7e0d1c53ab8e6b7de",
"oa_license": "CCBY",
"oa_url": "https://essuir.sumdu.edu.ua/bitstream/123456789/77099/1/Krivosikova_mmi_2020_1.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "7b1c31e8bbd6ac33d72f1d1f0a0e7d168f567f8a",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Biology"
]
} |
226097131 | pes2o/s2orc | v3-fos-license | Development of BDMS Utilization Review Technology (BURT): An Artificial Intelligence Tool Using Thai Natural Language Processing to Assess Appropriateness of Hospitalization
OBJECTIVES: To develop an effective artificial intelligence (AI) driven platform to optimize the process of assessing appropriateness of hospitalization. MATERIALS AND METHODS: Anonymized data of 22,020 insured-patient admissions in a BDMS network hospital were included to build a prediction model based on a comprehensive guideline for appropriate hospitalization. To develop the Thai Natural Language Processing (NLP) model, 77,707 sentences from medical records were used and separated into two datasets, 80% for training and 20% for testing. Combined NLP and rule-based algorithms formed an AI engine, and outputs were displayed using a web-based application. An expert panel of five Utilization Management (UM) physicians held several collaborative discussions to fine-tune the NLP model, the application of clinical criteria, and the classification engine. Eventually, the NLP model in its latest version (BURT1.1) had satisfactory performance, with overall accuracy, precision, recall, and F1 all higher than 99%. RESULTS: The performance of BURT1.1 was assessed using 300 cases randomly selected from the main dataset, against other methods, including concurrent review by UM nurses at the participating hospital and UM nurses at Bangkok Hospital Headquarters (BHQ). Agreement with the UM Physician Panel consensus was set as one of the performance indicators, and BURT1.1 showed a favorable outcome with the highest rate of agreement (86%) among all the methods. The precision rate was 99% when compared to insurance claim approval status. Additionally, dramatic time savings were achieved, with 0.59 seconds of processing time compared to 10-15 minutes per case for conventional manual review. CONCLUSION: BURT1.1 should be effectively implemented as an automatic daily tool to screen for inappropriate hospitalization. It can immediately identify patients at high risk of inappropriate hospitalization that require further assessment by a UM nurse, thus providing feedback to attending physicians on the completeness and quality of documentation, with parallel notification to UM physicians. Ultimately, BURT1.1 can contribute to increased UM efficiency, a faster claim process, reduced health care costs due to unnecessary hospitalization, and fewer claim denials.
Utilization management (UM) has been effectively used as one of the approaches to reduce consumption of unnecessary healthcare services and thus helps contain cost. It is particularly important in the health insurance industry, due to the moral hazard effect that changes patient and physician behaviors and results in overutilization of resources. 1,2 A range of utilization management procedures have been deployed by third-party payers to prevent inappropriate admission. Pre-authorization is a widely-used technique to certify the need for hospitalization and medical care, but it contributes to administrative burden and may delay necessary services. As a result, most insurance companies in Thailand implement a pre-authorization process for non-urgent surgical procedures and high-cost diagnostic procedures only, leaving non-surgical cases to be submitted and adjudicated after the services have already been delivered. Unexpectedly rejected claims can sometimes cause confusion and frustration to patients. Due to the aforementioned reasons, concurrent review is currently a main monitoring measure performed by both insurance companies and hospitals to ensure medical necessity of resource utilization during hospitalization. The concurrent review criteria may be either referred from guidelines or determined by specific clinical attributes such as severity of illness and intensity of services. 3 Both approaches require initial screening, which is normally performed by utilization review (or management) nurses (UR or UM nurses) to compare the patient's clinical conditions, including both objective data (e.g. blood pressure, body temperature, laboratory and imaging results) and subjective data (e.g. chief complaint, present illness, physical examination), to a set of criteria. The data to be reviewed come from various sources such as the Hospital Information System (HIS), peripheral systems, and medical records. In addition to the difficulties of a complicated and time-consuming review process, human errors from data oversight have occurred frequently due to data overload. 4 Moreover, the reviewers' performance and the accuracy of review results largely depend on the individual's clinical knowledge, cautiousness, and experience. An inexperienced UM nurse might not be able to prioritize work and might spend time on low-value tasks, leaving unnecessary utilization of resources unattended and subsequently causing difficulties among the patient, provider, and payer.
The health insurance industry in Thailand has more than doubled in the past 7 years. The total health insurance premium per annum has continuously increased from 43.4 billion baht in 2012 to 91.5 billion baht in 2019. 5 This indicates the increasing demand for human resources in both insurance companies and hospitals to handle claim processes and concurrent review. In the US, automated processing and assessment systems have been invented to reduce administrative burden, paperwork, and cost, and also to support the decision-making process. [6][7][8] The feasibility of Natural Language Processing (NLP) for narrative medical record processing was explored as early as 1981 by Hirschman, who used NLP to analyze discharge summaries, implemented evaluation criteria, generated evaluation results, and compared them to those from a physician reviewer. 9 Currently, extensive AI-enhanced solutions have been offered by many IT companies to lessen UR efforts, such as Case Advisor Services by Optum360° and CORTEX®: The Precision Utilization Management Platform by XSOLIS. 10,11 However, those may not be suitably applied to the insurance industry in Thailand due to the different language of medical record documentation and different contextual factors, especially the high proportion of inappropriate "simple diseases" admissions. 12 These diseases can normally be treated in the outpatient department. Moreover, Thai Natural Language Processing has continuously evolved for decades but is not yet fully developed due to the word and sentence complexity of the Thai language. 13 Hence, new methods or innovative approaches should be introduced to address these problems.
Research and development on machine learning and deep learning for NLP tasks have advanced dramatically. Many advanced machine learning models have been developed from the neural network and its variants, and each of them has different characteristics and applications. Some research has been conducted on implementing a Convolutional Neural Network (CNN) in a sentence classification task and achieved excellent results, even when the training data came from many sources. 14 Moreover, there is research that implements a Recurrent Convolutional Neural Network, a hybrid neural network, in a sentence classification task by integrating CNN and Long Short-Term Memory (LSTM) so that the model can capture contextual information. 15,16 As for the problem of the Thai language in NLP, PyThaiNLP was developed specifically for this issue. 17 It can segment and tokenize Thai words efficiently and was selected as a tool in our task.
Although health information entry in a structured format tremendously supports subsequent computer processing, narrative language and semi-structured data may have the superior advantages of comprehensiveness, capturing the clinician's thought process, and fine structure representation. 18 As a result, Bangkok Dusit Medical Services, Plc., (BDMS), the largest private hospital network in Thailand, designed and internally developed "BDMS Utilization Review Technology (BURT)" as a decision support application to detect inappropriate hospitalization using natural language processing and a rule-based approach. The implementation of BURT will significantly reduce the assessment processing time for UM nurses, since it immediately captures all necessary data at their fingertips. The precision of review results is expected to be higher than that of reviews performed by inexperienced UM nurses, who occasionally miss crucial data. Moreover, BURT should help expedite the claim process, since it will help prioritize cases that require attention and prompt actions, such as improving the quality of clinical information provided to payers or clearly communicating to patients regarding unnecessary hospitalization requests.
Among several techniques of machine learning, neural network processing is very popular due to its broad learning abilities. As a result, various models have been applied. The Convolutional Neural Network (CNN) is an advanced form of neural network that can group sentences across a great variety of semantic categories; it has been tested with four default model configurations, 14 on various sentence styles, 15,16 and can be made more efficient with different structures. 19 However, due to the complexity of the internal structure of CNN, which is caused by the application of hierarchical mathematical procedures, there are aspects that many researchers are still trying to understand and interpret.
The ability of these successful studies to properly provide information and increase work efficiency 20 formed part of the approach and the inspiration for applying CNN to Natural Language Processing (NLP) in this work.
Using the CNN in the BURT program allows us to abstract and interpret free-text data in medical records to determine whether a word or sentence meets certain criteria. The system has a set algorithm based on data from both the NLP and the rule-based approach.
Establishment of admission criteria based on current medical standard of practice
To make the tool worthwhile and suitably applicable for the Thai context, it should be able to address the common problem of unnecessary simple disease admissions. Since the definition of simple disease is not universally agreed upon and there is no gold standard of admission guideline for simple disease, we appointed an expert panel of UM physicians including 5 physicians from different specialties and different hospitals. All of them have more than 10 years of medical practice experience and over 2 years of UM physician experience. The panel studied and compiled criteria from several sources including UpToDate, guidelines from The Royal College of Physicians of Thailand, National Clinical Practice, etc. 21-26 and subsequently developed a comprehensive guideline for appropriate hospitalization. The details of the 48 criteria variables were defined as A1 to F1 and can be seen in Appendix A.
Development of BDMS Utilization Review Technology (BURT) Web Application
We designed and developed BURT web application to evaluate appropriateness of hospitalization. It consists of two main components, including Prediction Model (AI and Rule-based) to analyze appropriateness of hospitalization based on admission criteria, and Display Application to present predicted result of each case.
Development of prediction model (BURT version 1.0)
We acquired data from a hospital in the BDMS network where electronic medical records (EMR) have been completely implemented. Anonymized data of insured patients admitted from January to May 2019 (1,000 cases) were included to build a prediction model (demographic information can be found in Appendix B). This dataset was used for the development of the prediction model, both the AI (Natural Language Classification Model) part and the rule-based prediction part. The free-text dataset documented in the History of illness, Physical examination, and some physicians' order notes contributed to the development of the Natural Language Classification Model.
From the admission criteria, we defined 65 terms for NLP training, for example "cannot walk", "weakness in arms and legs", "mental status change", "swollen face", "swollen eyelid", "very fatigued" etc. for criterion B1 – General Appearance. These were sorted from the abovementioned dataset, which made altogether 9,870 sentences. Data were randomly split into two sets of 80% (7,896 sentences) for model training and 20% (1,974 sentences) for model testing. From all text data, sentences were split into words for training with PyThaiNLP (Appendix C), which is a Python library to tokenize Thai words. 17 A word segmentation technique was deployed in both the training step and the testing step in our research. A window size of 7 was used (7 words before and 7 after the context word). The expert team labeled each phrase to indicate whether those phrases corresponded with the desired criteria, using the binary codes of 1 for corresponded and 0 for not corresponded. Again, the PyThaiNLP package can facilitate the transformation of word encodings into randomly initialized vectorized numerical data for CNN training in the next step.
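A minimal sketch of the word-segmentation and labelling step is shown below; the example sentence, the criterion phrase, and the label assignment are illustrative only, while `word_tokenize` is the standard PyThaiNLP segmentation call.

```python
from pythainlp.tokenize import word_tokenize

# One hypothetical free-text clinical sentence and one target phrase for
# criterion B1 "General Appearance" (e.g., "very fatigued").
sentence = "ผู้ป่วยอ่อนเพลียมาก รับประทานอาหารได้น้อย"   # illustrative Thai clinical note
target_phrases = ["อ่อนเพลียมาก"]                        # phrases agreed by the expert team

tokens = word_tokenize(sentence, engine="newmm")        # dictionary-based Thai word segmentation

# Binary label as in the text: 1 if the sentence corresponds to the criterion, 0 otherwise.
label = int(any(phrase in sentence for phrase in target_phrases))

print(tokens)   # segmented Thai words, later encoded as vectors for CNN training
print(label)    # 1 -> criterion "Met" candidate, 0 -> "Not Met"
```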
Labeled data and data from word tokenization were used to train a CNN model, which consists of 2 layers: a 3x1 convolutional layer and a dense layer. For the dense layer, the number of neural network nodes depended on the number and complexity of the training sentences, so the overall model consists of multiple models. The trained CNN model was implemented for each single NLP criterion and provided a predictive result from all medical sentence input. Therefore, each NLP criterion is classified as "Met" or "Not Met". The flow chart of the Natural Language Classification model of this project is shown in Figure 1. The performance using the testing data as input achieved 95% accuracy.
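The structure described above (a 3x1 convolution followed by a dense layer, with one binary classifier per NLP criterion) could look roughly like the Keras sketch below; the embedding size, filter count, vocabulary size, and pooling step are assumptions, not the exact configuration used for BURT.

```python
from tensorflow.keras import layers, models

def build_criterion_classifier(vocab_size=5000, embed_dim=64, n_filters=64):
    """One binary 'Met' / 'Not Met' classifier for a single NLP criterion.

    Input: sequences of token indices, e.g. the 15-token window of 7 words
    before and 7 words after the context word, padded to equal length.
    """
    model = models.Sequential([
        # Token indices -> randomly initialised vectors, trained together with the CNN.
        layers.Embedding(input_dim=vocab_size, output_dim=embed_dim),
        # 3x1 convolution over the token sequence, as described in the text.
        layers.Conv1D(filters=n_filters, kernel_size=3, activation="relu"),
        layers.GlobalMaxPooling1D(),
        # Dense layer whose width would be tuned to the number/complexity of training sentences.
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # probability that the criterion is "Met"
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# One such model is trained per criterion, so the overall engine is a collection
# of small classifiers, e.g.:
#   model_b1 = build_criterion_classifier()
#   model_b1.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10)
```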
A rule-based module was set up with conditional values taken from vital signs, laboratory results, and computerized physician order entry (CPOE) (Table 1). The criteria rule was based on the comprehensive guideline for appropriate hospitalization compiled by the UM Physician Panel. This combined AI and rule-based algorithm was then able to evaluate the appropriateness of admission and show results as "Appropriate hospitalization" or "Inappropriate hospitalization".
Fine-tuning process
Data from 500 cases, which were a subset of the 1,000 cases (Appendix B), were randomly selected and comprehensively studied by the UM physician expert panel. A series of collaborative discussions among the UM Physicians was held, and the final agreement for each case was reached by majority vote. The mutual goal was threefold: to improve NLP accuracy, to evaluate the appropriateness of the admission criteria algorithm setup, and to evaluate the effectiveness of automated application of the criteria. We aimed to make BURT capable of analyzing appropriateness of hospitalization in both simple diseases and non-simple diseases with various levels of severity. During the fine-tuning process, we increased the keywords for NLP training (from 65 to 79 terms) and increased the NLP training and testing dataset (from 9,870 to 77,707 sentences). The anonymized data of insured patients admitted between December 2017 and October 2019 (22,020 cases) were included. The rule-based model was also enhanced from condition-based to scoring-based in order to classify different levels of severity. After the fine-tuning process, tremendous improvement was made from the first to the latest version of the tool, described as BURT 1.0 and BURT 1.1 respectively. Examples of major improvements are explained in Appendix D.
After several consecutive cycles of NLP training and testing, we achieved strong results for the NLP model, with significantly increasing accuracy, precision, recall, and F1-score (Table 2). The definition of each metric can be seen in Figure 2. The agreement between BURT 1.1 and the UM Physician Panel also increased significantly, from 70% to 85% (Table 3). We intended to minimize the number of false positives (false appropriate), since they would make UM nurses inadvertently rely upon the predicted results and overlook any issues that require a necessary action. We then thoroughly reviewed all 14 false positive (false appropriate) cases and found out that the agreements on inappropriate hospitalization among the UM Physician Panel were not unanimous. Ultimately, BURT 1.1 showed very promising predicted results. However, given that advanced medical technology and treatments are continuously evolving, we will continue to align our tool accordingly.
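For reference, the metrics reported in Table 2 can be computed directly from confusion-matrix counts as in the short sketch below; the counts themselves are placeholders.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0   # predicted "Met" that were truly "Met"
    recall    = tp / (tp + fn) if (tp + fn) else 0.0   # true "Met" that were found
    f1        = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1

# Placeholder counts for one NLP criterion on the test sentences:
acc, prec, rec, f1 = classification_metrics(tp=950, fp=5, fn=4, tn=1015)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} F1={f1:.3f}")
```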
Display Application
The predicted results are displayed on the web application. The dependent variable of the appropriate hospitalization decision was categorical and has three possible results: appropriate admission, borderline inappropriate admission, and inappropriate admission (Figure 3 and Figure 4).
• Inappropriate admission (Total score = 0) means admission criteria are not satisfied. The case will be shown in red color, displayed on the top, and should be given first priority by the UM nurse.
• Borderline inappropriate admission (Total score = 1) means admission criteria are partially satisfied, but still require manual review by the UM nurse. The case will be shown in yellow color, displayed beneath the red ones, and should be given second priority by the UM nurse.
• Appropriate admission (Total score ≥ 2) means admission criteria are strongly fulfilled. The case will be shown in green color and displayed at the bottom. The UM nurse can simply disregard this category unless for educational purposes.
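A compact sketch of how the total score could be mapped to the displayed category, colour, and review priority is given below; the thresholds follow the text, while the function and field names are assumptions for illustration.

```python
def classify_admission(total_score):
    """Map the rule/NLP total score to the category shown in the BURT display."""
    if total_score <= 0:
        return {"category": "Inappropriate admission", "colour": "red", "priority": 1}
    if total_score == 1:
        return {"category": "Borderline inappropriate admission", "colour": "yellow", "priority": 2}
    return {"category": "Appropriate admission", "colour": "green", "priority": 3}

# Cases would then be sorted by priority so that red cases appear at the top
# of the UM nurse's work list:
cases = [{"id": "A-001", "score": 3}, {"id": "A-002", "score": 0}, {"id": "A-003", "score": 1}]
worklist = sorted((dict(c, **classify_admission(c["score"])) for c in cases),
                  key=lambda c: c["priority"])
for c in worklist:
    print(c["id"], c["category"])
```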
The prediction outcome and detailed information for each criterion will be displayed on the screen so that users can take suitable further action, notice weaknesses of the algorithm, or even make recommendations and give feedback to the application administrator.
BURT1.1 performance evaluation
We evaluated the performance of BURT1.1 in assessing inappropriate hospitalization by comparing its results to those obtained manually by UM nurses. Using an identical set of 300 cases, we compared the review results from three sources: BURT1.1, the concurrent review data collection from the participating hospital, and the retrospective review report from Bangkok Hospital Headquarters. Firstly, we randomly selected 300 cases from the 500 cases that had been thoroughly reviewed by the UM Physician Panel, which had reached a consensus on appropriateness of hospitalization. Secondly, we searched for the review results of those 300 cases that were manually reviewed by UM nurses in the participating hospital and concurrently reviewed during the patient stay. Thirdly, we retrieved detailed clinical data of the 300 cases from the important sources required by the BURT1.1 algorithm, summarized them in a ready-to-review Excel form, and assigned them to UM nurses at Bangkok Hospital Headquarters. Three UM nurses with different levels of experience, including 1 in-charge UM nurse and 2 operational-level UM nurses, were specifically selected to perform secondary analysis of the retrospective data. The criteria set was clearly explained prior to the review process in order to reduce variance. The rates of agreement on detection of inappropriate hospitalization between the UM Physician Panel and BURT1.1, between the UM Physician Panel and the concurrent review report from the network hospital, and between the UM Physician Panel and the retrospective review report from Bangkok Hospital Headquarters were then compared (Table 4).
Overall, BURT1.1 showed a favorable outcome with the highest rate of agreement (86.00%) among all the methods. The in-charge UM nurse and the two operational-level UM nurses from Bangkok Hospital Headquarters had 79.00%, 77.33%, and 76.00% respectively. The network hospital showed the lowest rate of agreement (72.67%), presumably due to excessive workload and a laborious manual review process in which the scattered sources of information might easily lead to data oversight. 28 The performance evaluation results demonstrated that BURT1.1 can support the utilization review process in detecting inappropriate admission and lead to an appropriate action being taken by a UM nurse.
BURT1.1 prediction output and insurance claim approval
We retrospectively evaluated the insurance claim adjudication decisions made by insurance companies for those 300 cases (Table 5). The claim approvals apparently aligned with the BURT1.1 prediction outputs. As we prioritized precision over recall, the 99% precision represented a satisfactory prediction outcome. There were three cases for which the BURT1.1 predicted output showed appropriate, but the claims were denied by insurance companies. We thoroughly reviewed the reasons for claim denial for these three particular cases and found out that they were non-clinical issues of suspected pre-existing conditions. On the other hand, in Thailand, the claim denial rate for inappropriate hospitalizations was relatively low due to the fact that various factors are usually incorporated into the claim adjudication process, especially special business conditions that can overrule medical necessity and insurance policy details. However, BURT1.1 was not designed to predict the claim approval decision; it will definitely assist UM nurses in detecting medically inappropriate admission, inappropriate documentation, and patients' requests for unnecessary admission. For one case rejected by an insurance company due to unnecessary admission, for instance, BURT1.1 would have displayed the unmet criterion, which actually resulted from incomplete documentation. Feedback to the physician should have been provided immediately to improve medical record documentation.
Time and cost-saving utilization review process
The processing time of a conventional admission review performed manually by UM nurses varies between 10 and 15 minutes per case, depending on the UM nurse's experience, medical knowledge, and competency. Within 0.59 seconds, BURT1.1 can provide a prediction result output; this allows UM nurses to perform more valuable tasks, increases productivity, and reduces operating costs. For example, for the total of 300 cases in our BURT1.1 performance evaluation, UM nurses could have dismissed 206 (69%) cases and directly focused on only 94 cases.
Discussion
The significance of our study is that the predictive algorithm development process has been exhaustively refined by a UM physician expert panel. Additionally, the combination of Thai natural language processing and a rule-based model, which runs on a web application, makes it highly applicable to other hospital settings. However, pre-existing electronic medical records (EMR) and computerized physician order entry (CPOE) are unavoidable prerequisites, since the tool needs electronic records for the rule-based approach.
A limitation of this study is that the data come from only one hospital, so overfitting could be a problem that we would expect to encounter due to the specific practice patterns and clinical cultures of each hospital. Similar to other previous studies, no clinical NLP algorithm is completely accurate, and we also found the same challenges of missing information and of medical narratives that explain the severity of illness not in a guideline terminology format but implicitly within the documents. 27,29 BURT1.1 will be implemented as an automatic routine screening tool in a pilot hospital in the BDMS network. It will capture data directly from the Hospital Information System (HIS) and related peripheral systems and perform hospitalization appropriateness evaluation. However, further work needs to be contributed by both developers and users to make the tool function more accurately and to fix the overfitting problem. Under certain circumstances, disagreement between BURT1.1 and a human expert may occur and should be discussed with an application administrator. A potential improvement would be to refine the precision of the NLP and to measure the sensitivity and specificity of the tool. An advanced scoring system for severity of illness, including the patient's chief complaint, clinical signs, and symptoms, should be established to provide probabilistic outputs in more detail. An additional thesaurus of signs and symptoms may be required for extra training. The tool should be directly integrated into the BDMS e-Claim system and a shared display application in order to perform real-time evaluation. To make the tool easily applicable to analyzing evidence-based guideline compliance for other resource utilization, such as the appropriateness of diagnostic and therapeutic services, we plan to enhance the NLP tasks to be able to extract temporal information, track the longitudinal progression of an illness, 30 and transform unstructured data into structured data. 31
Conclusion
Through several collaborative discussions among Utilization Management experts, with available electronic health record (EHR) data, we developed an AI-enhanced tool using Natural Language Processing and rule-based algorithms, namely BDMS Utilization Review Technology (BURT), and its improved version, BURT1.1. BURT1.1 should be effectively implemented as an automatic daily screening tool for inappropriate hospitalization, which is a first-level review process of utilization management. It can immediately identify records at high risk of inappropriate hospitalization that require further manual review by a UM nurse, provide feedback to the attending physician on the completeness and quality of documentation, and notify the case to be reviewed by a UM physician. BURT1.1 will also be beneficial to UM nurses in developing their knowledge and professional judgement, since it will clearly show on the display screen how the patient's data satisfy each specific criterion. Ultimately, the use of BURT1.1 would increase UM efficiency, expedite the claim process due to more complete data submissions, reduce health care costs from unnecessary hospitalization, and reduce claim denials. | 2020-10-28T18:44:45.888Z | 2020-09-25T00:00:00.000 | {
"year": 2020,
"sha1": "e8e33d7af5547912d564f56bd830f6dc71ce64fe",
"oa_license": "CCBY",
"oa_url": "https://he02.tci-thaijo.org/index.php/bkkmedj/article/download/243025/170727",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "e8651e3d386fca7a7d32f7d2165fdff68d2e4af5",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
190660659 | pes2o/s2orc | v3-fos-license | Revisiting the Myth of Irishness and Heroism—An Analysis of W.B. Yeats’ The Green Helmet
Using Sabina J. Müller’s theory of myth, this paper will analyze the deconstruction and revision of myths in The Green Helmet. Irish mythology, as described by Müller, has in its inheritance a myth of Ireland being personified in an image of a woman, a goddess of the land. This Ireland-as-woman image, as well as Irish heroism and their fighting spirit will be investigated in Yeats’ drama to show its simplification and to mock its nature. The analysis will also give an insight into the diversified and complex image of the Irish nation.
Introduction
From the eighteenth century onwards, Irish mythology entered the Anglo-Irish tradition, yet it is still preserved in Irish literary output. In The Green Helmet, W.B. Yeats, one of the leading figures of the Celtic Literary Revival and the Irish Renaissance, which aimed at restoring Irish culture and language, made an attempt to demythologize two of its myths, heroism and Irishness, which seem to be crucial in Irish mythology. The play shows how Yeats, incorporating the Cuchulainn character, satirizes the stereotypical perception of the Irish as a deluded and drunken nation. Irish mythology has in its inheritance a myth of Ireland being personified in the image of a woman, a goddess of the land. This Ireland-as-woman image, as well as Irish heroism, will be investigated in Yeats' drama to show its simplification and to mock its nature. Using Sabina J. Müller's theory of myth, this paper will analyze the deconstruction and revision of myths in The Green Helmet.
Theoretical Framework
As Müller described it, a theory of myth has to explain the function of myths and how literary works achieve specific effects by applying them. The creation of a myth can be traced back to the 12th century, and one of the mythographers who focuses on the creation of the myth is Mircea Eliade, for whom all myths are religious, being based on the fundamental principle of the sacred and the profane. The first one can have, for Eliade, a threefold effect on the latter one: "hierophany creates a sacred space, (…) establishes cosmos, order within profane, chaotic space. (…) creates absolute reality" (Müller, 2007, p. 10). Further on, Müller states that a religious person takes part in mythical time, in illo tempore, and that sacred time is nonhistorical – people endeavor to "regain the sacred time by periodically reintegrating it into profane time by means of rites" (Müller, 2007, p. 11).
By definition, any theory of myth should be applicable to all kinds of them, but practice shows something contrary. For example, Freudian and Jungian theories of myth are more suitable for hero myths. They are both associated with the nature of dreams, yet while Freud relates myths to sexual wishes, Jung claims that myths abet psychological growth. Myths, being the preconscious psyche, make the monsters, gods, and heroes archetypes, making their adventures symbolic representations of the ego's consciousness. At the same time, as Müller puts it, the Jungian and psychological approach is applicable to both primitive individuals as well as modern society.
Primitive people feel safe and protected, because they live in a world watched over by gods. (…) If modern people want to experience safety and harmony, they must discover the self and establish a connection with the unconscious. Thus myths have for primitive and modern human beings alike the function of giving meaning to life (…). (Müller, 2007, p. 24) Celtic mythology touched upon the motif of female figures as well as the land-as-woman theme long before it entered the Anglo-Irish tradition. Following Rosalind Clark's account of its development, the image of a woman in Celtic mythology started from the goddess of fertility, further changing into the pseudo-historical woman and after the year 1000 standing for an allegorical land. The final stage for Clark is the 19th-century Ireland-as-woman motif. This image was also linked with the Irish sovereignty myth, where female figures appeared in three different dimensions: "as beautiful maidens, powerful sexual women or as an old hag" (Müller, 2007, p. 35). Later on, through the aisling poems and their translations, patriotic songs, and folklore, the 18th century witnessed the emergence of Cathleen Ni Houlihan.
Cathleen was present in Irish culture and literature since that time, but it was Yeats who revived her figure, representing Ireland and standing for its personification as a sovereignty and earth pagan goddess, and a woman whom the Irish went to fight and die for. Yet, while she brought fertility, she also demanded sacrifice. She was still a goddess, but there was no sovereignty – she did not have her own land, she was just striving to reclaim her four green lands.
However, the most famous hero in Irish mythology is Cuchulainn, about whom legends and myths form a part of the Ulster Cycle. He was also called the Irish Achilles (Ellis, 1987, p. 72). Having a mortal mother and a divine father, Cuchulainn is mostly famous for his single-handed defense of Ulster in the war of the Táin.
Cuchulainn was married to Emer, whose hand also required a lot of effort and struggle from him. He is also a tragic hero-he dies on a battlefield while taking revenge for the death of the king of Munster. Defeated, exhausted and dying, he ties himself up to a pillar so that he can stand with his feet on the ground and face the eyes of his enemies. He was a champion of all Ireland.
William Butler Yeats was almost as much a legend as the hero he was writing about. The founder of the National Theatre Society, the Abbey Theatre, a leading figure of the Irish Literary Revival, and a 1923 Nobel Prize winner, he was the poet and dramatist of the Irish nation. Despite the huge success of his poems, Yeats considered himself predominantly a dramatist. With his two dozen plays and the establishment of the Abbey Theatre, he made Dublin the theatrical and dramatic centre of cultural Europe at the beginning of the 20th century, and also brought Irish culture and traditions into the English-speaking world (Sternlicht, 1998, p. 46). This is particularly visible in The Green Helmet, where Yeats decides to de-mythologize Irishness and uncover human vanities, giving a realistic and amusing picture of the Irish heroes and their wives.
Analysis
The Green Helmet, subtitled "An heroic farce", begins with the arrival of Cuchulainn and the appearance of the green helmet, which is thrown by the Red Man, who is going to appear again towards the end of the play. The three men are boasting about their courage, fearless deeds, and heroic attitude, which give all of them the right to have and wear the noble green helmet. They swell with pride, looking at the helmet in their hands. The whole scene, however, is far from heroic; rather it is down-to-earth and shows self-pride and an inability to reach a compromise, which Yeats might have mirrored in the Irish nation. Overtly brave fighters and men of their nation, yet giving a false impression. Secular and human vanity nearly got the better of them. Fortunately, in the same scene, Cuchulainn comes to save the honour of all three and fills the helmet with ale and asks the other warriors to drink with him so that all of them would be equal and no one would feel superior or inferior. They all drink from the helmet and Cuchulainn throws it into the sea.
The above scene also demonstrates the Jungian theory described by Müller. It illustrates how the presence of mythology and gods, in this case the mythical figure of the Red Man, and the importance of the green helmet, helps people find meaning in their lives. The possession of the green helmet, or any other attribute of superiority, might have a purpose in a warrior's life. Fighting over it, as in the scene above, demonstrates, however, that it was not also accompanied by heroic behavior or a noble deed.
Not only is heroism presented and de-mythologized in the play. Further on, we can observe how Yeats revised the Ireland-as-woman image. As Müller puts it, "the personification of Ireland as it appears in nationalist discourse and traditional literature simplifies" (Müller, 2007, p. 78). The woman is both a passive and at the same time symbolic figure, with no complex feelings or ambitions, or she may be presented in a way that tampers with the history of the Irish people. Also bearing in mind the previously mentioned sovereignty of the woman who stands for Ireland in Celtic mythology, Yeats gives us quite a contrary picture of Irish women.
A second amusing episode in The Green Helmet comes with a scene where the women of the three warriors – Cuchulainn's, Conall's, and Laegaire's wives – want to enter the building, and all three of them would like to enter first, since they feel they deserve supremacy due to their husbands' noble feats or other achievements. The three women cannot agree upon the order of their entrance. Every wife wants to enter first, feeling superior to the others. One might say that the image of the Irish woman as a strong, regal, and powerful figure who stands up for the nation, at the same time denoting the allegorical green fields and Ireland itself, is here shattered and diminished. Female vanity comes in its place, showing how shallow, arrogant, and self-centered Irish women can be. The Ireland-as-woman image is also not simplified, nor is the image of woman, since the three ladies in Yeats' play show mere pretension and conceit. The author made them appear devoid of any mythological or divine qualities. Irishness, woman as a symbol of the land, the personification of Ireland, is somehow prone to human error and temptations. While Jung stated that myths abet psychological growth, here one can observe the clashing representation of women. Prone to vanity and with no ability to reach a compromise for the greater good, these Irish women are imperfect and fail to control their ego or consciousness. Being superior is the chief aim of the three ladies. The scene, however, is once again saved by Cuchulainn who breaks the door and lets the three ladies enter simultaneously, indicating at the same time their equality.
Towards the end of the play, the Red Man emerges again and demands his debt be paid, and one head cut off. Cuchulainn saves the situation once more and steps out, willing to be the victim. He volunteers to sacrifice himself in the name of and for the sake of his people. Nevertheless, his speech just before he offers himself to the Red Man uncovers some of the blemishes of his character: he admits to having been unfaithful, and he even explains to his wife that thanks to his other deeds she will be raised up in the world. The mythical figure of Cuchulainn, even though he makes the huge sacrifice of his own life, is devalued and diminished. He is a human and earthly figure driven by his sexual desires. His weaknesses are also mentioned in the very last passages of the play, in the final speech of the Red Man: "the hand that loves to scatter, the life that like a gambler's throw (…)" (The Green Helmet, line 91). Bodily, mortal and sexual motifs drove Cuchulainn, the greatest warrior in Ireland, to succumb to his own weaknesses.
Conclusion
As has been presented, Yeats managed to deconstruct the myth of both heroism, in the character of Cuchulainn and the other Irish warriors, as well as the myth of Ireland-as-woman. The three warriors showed no understanding, merely pride, wanting to possess the green helmet. Their lack of humility and modesty revealed their faulty nature. The image of woman standing for the notion of Ireland and Irishness was also ridiculed by Yeats.
The three ladies present in Yeats' play were vain and self-centered to the extent that they could reach neither a solution nor a compromise. Bereft of regal mythical virtues or female grace, they were stubborn and uncompromising to the very end. Superiority over others was more important to them. Finally, Cuchulainn himself proved to be of faulty, human nature as well, admitting to having cheated on his wife and showing resentment towards his wife's attitude. The Green Helmet shows how the divine and regal nature of myth is deconstructed by the human vanities that Yeats uncovers throughout the play. A deluded nation, unable to reach a compromise, that is the image presented by Yeats. A hero prone to human errors and a woman clothed in vanity and stubbornness. Nevertheless, Yeats also presented Cuchulainn as a hero who is ultimately able to find solutions to problems without leaving anyone feeling inferior, and as a hero who is willing to sacrifice himself for the life of his people. Some virtues of a true warrior and a man of his nation are still there among the Irish for Yeats. The myth of Irishness and heroism was thus ridiculed and deconstructed, yet it was also revised. Perhaps, for contemporary Ireland, which is a postcolonial nation struggling in a search for identity in an English-speaking world, it is now time to revise its myths again, as a source of inspiration and cultural identity. | 2019-06-15T13:17:47.298Z | 2014-11-28T00:00:00.000 | {
"year": 2014,
"sha1": "04e036d1fefd231e4529c632729ba5e6c65a70c1",
"oa_license": "CCBYNC",
"oa_url": "http://www.davidpublisher.org/Public/uploads/Contribute/5513bc4c13924.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "71e48d4217e27cc191321d50f25cdbe4bb692428",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"Art"
]
} |
108960968 | pes2o/s2orc | v3-fos-license | Weight and Size Estimation of Energy Efficient LED Ballasts
At the given time, the demand for electrical energy is growing while prospects of new electrical energy sources are quite questionable. This requires for an increase in energy efficiency that, in turn, can be achieved through increasing of self-efficiency of electrical technologies. Another way is making electrical equipment “smart” that means reasonable limitation of its operation. In particular, in the field of electrical lighting these two ways can be combined if Light Emitting Diodes (LEDs) are used [1]. On the one hand modern LEDs have efficacy of several tens lumens per watt that is comparable with high pressure sodium lamps. On the other hand it is possible to effectively adjust the light produced by LEDs with no negative impact on them. This paper estimates the efficiency of various LED ballasts in the context of optimization of their weight and size. Amount of light produced by an LED is proportional to its current. This brings forward two light control methods [2]: 1) fluent regulation of LEDs current when its value varies depending on the light request; 2) pulse mode regulation of LEDs current when it is either zero or maximum but its average value varies depending on the light request. Since the light produced by LED follows its current at a very high rate [3] the second method may lead to flickering and stroboscopic effects. One more light regulation method [2] is possible because rated power of LEDs is usually small. For this reason LED luminary usually includes a number of LEDs. Then it is possible to divide them into groups and control each group separately. This method, however, ensures lesser dimming levels and lower accuracy of regulation. Therefore, the first regulation method – fluent regulation of LED current is preferable. LED itself is a low voltage element. This mostly requires a DC/DC stage for dimming even if the LED luminary is fed from AC line. This argument is especially significant if the luminary has few LED groups that must be dimmed separately, for instance, in the case of street lighting. Various DC choppers can be used as the regulators: buck, boost, buck-boost etc [4]. All these converters are pulse mode circuits that may be driven in different ways – pulse width modulation, frequency modulation etc. The chosen topology and control method has significant impact on the efficiency of the dimmer [5]. They also have influence on its size and weight. The given paper investigates buck and boost dimmers operating in pulse width modulation mode form the point of view of weight/size and efficiency. The converters are estimated analytically and through simulation as well as tested experimentally. Then the conclusions about the optimal choice are formulated at the end.
Introduction
At the given time, the demand for electrical energy is growing while prospects of new electrical energy sources are quite questionable. This calls for an increase in energy efficiency, which, in turn, can be achieved through increasing the self-efficiency of electrical technologies. Another way is making electrical equipment "smart", which means a reasonable limitation of its operation. In particular, in the field of electrical lighting these two ways can be combined if Light Emitting Diodes (LEDs) are used [1]. On the one hand, modern LEDs have an efficacy of several tens of lumens per watt, which is comparable with high-pressure sodium lamps. On the other hand, it is possible to effectively adjust the light produced by LEDs with no negative impact on them. This paper estimates the efficiency of various LED ballasts in the context of optimization of their weight and size. The amount of light produced by an LED is proportional to its current. This brings forward two light control methods [2]: 1) fluent regulation of the LED current, when its value varies depending on the light request; 2) pulse-mode regulation of the LED current, when it is either zero or maximum but its average value varies depending on the light request. Since the light produced by an LED follows its current at a very high rate [3], the second method may lead to flickering and stroboscopic effects. One more light regulation method [2] is possible because the rated power of LEDs is usually small. For this reason an LED luminary usually includes a number of LEDs. It is then possible to divide them into groups and control each group separately. This method, however, provides fewer dimming levels and lower accuracy of regulation. Therefore, the first regulation method, fluent regulation of the LED current, is preferable.
The LED itself is a low-voltage element. This mostly requires a DC/DC stage for dimming even if the LED luminary is fed from the AC line. This argument is especially significant if the luminary has several LED groups that must be dimmed separately, for instance, in the case of street lighting. Various DC choppers can be used as the regulators: buck, boost, buck-boost etc. [4]. All these converters are pulse-mode circuits that may be driven in different ways: pulse width modulation, frequency modulation etc. The chosen topology and control method have a significant impact on the efficiency of the dimmer [5]. They also influence its size and weight.
The given paper investigates buck and boost dimmers operating in pulse width modulation mode from the point of view of weight/size and efficiency. The converters are estimated analytically and through simulation as well as tested experimentally. The conclusions about the optimal choice are then formulated at the end.
General considerations
Weight and size of any electronic converter depend on those of its elements. However, from this point of view some elements are dominating over the others and their contribution has to be taken into account first. The most significant components of one-switch DC/DC dimmers are inductance coil, power diode and power transistor together with their heatsink (whose size depends on the power losses) and driver. Previous experience shows that they take up to 50% of the total volume and up to 40% of the total weight. That is why this paper is focused on the estimation of these elements.
In this research the buck and boost topologies of the dimmers have been investigated due to their potentially better control performance.
The schematic of the experimental setup for the buck converter is shown in Fig. 1-a and for the boost converter in Fig. 1-b. All elements of the testbench (VT1 - IRF540 MOSFET, VD1 - ultrafast diode MUR860, and the load containing seven series-connected LEDs W724C0 made by Seoul Semiconductor) were the same during all experiments to ensure that the difference of measured values between tests depends only on inductor parameter changes. The values of the inductance coil L SM were changed during the experiments. Several values of the switching frequency have been applied as well. The output power has been used as an argument for the output curves.
The control signal has been obtained from a function generator G1 and fed to the VT1 transistor through an HCPL-J312 driver. To carry out the measurements, four Extech EX430 multimeters with 0.3% basic accuracy were used.
Estimated influence of parameters
Expected influence of the dimmer type. If the choice of input voltage for dimmer is not limited then the dimmer can also be quite arbitrarily chosen. In this case the impact of topology of the dimmer on its weight/size may be the main criterion for its choice. At the same time it must be noted that the influence of the topology is not direct.
The utilized LEDs W724C0 have an operating voltage of 2.5…3.6 V that corresponds to an operating current of 0…2.8 A. Therefore seven such LEDs in series require from 17.5 to 25 V for the full range of current regulation. Such a voltage can be obtained from a buck, boost or buck-boost converter. The last one does not provide good performance from the point of view of control and is not discussed here. The buck dimmer operates better (from the same point of view) at 25 V input and requires 70…100% of duty cycle in this case. Similarly, the boost dimmer must have 17.5 V on its input and 0…30% of the duty cycle. Therefore, the first dimmer works with higher on-state losses in the switch, while the second one works with higher on-state losses in the diode.
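As a quick cross-check of these duty-cycle figures, the short sketch below applies the ideal continuous-conduction-mode relations for lossless buck and boost converters; the snippet and its simplifying assumptions are only an illustration of the numbers quoted above.

V_LED_MIN, V_LED_MAX = 17.5, 25.0   # V, seven W724C0 LEDs in series

def buck_duty(v_out, v_in=25.0):
    # ideal CCM buck: V_O = D * V_IN
    return v_out / v_in

def boost_duty(v_out, v_in=17.5):
    # ideal CCM boost: V_O = V_IN / (1 - D)
    return 1.0 - v_in / v_out

print("buck : D = %.0f%%...%.0f%%" % (100 * buck_duty(V_LED_MIN), 100 * buck_duty(V_LED_MAX)))
print("boost: D = %.0f%%...%.0f%%" % (100 * boost_duty(V_LED_MIN), 100 * boost_duty(V_LED_MAX)))
# prints 70...100 % for the buck and 0...30 % for the boost, as stated above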
The expressions required for calculation of the power losses in the buck and boost dimmers (Table 1) are based on the power equilibrium for input voltage V IN and output current I O . It is also assumed that the inductance of the coil is infinite, i.e. the switch and the diode conduct pulse-mode current. The nonlinearity of the LED load is represented as no consumption at voltages below 17.5 V and a linear load of 0…3 A corresponding to voltages of 17.5…25 V. Using the expressions of Table 1 it becomes possible to calculate the power losses and present them graphically (Fig. 2). This picture demonstrates that, as has been previously noted, in the buck converter the on-state losses of the transistor dominate at high duty cycles, while in the boost converter the diode losses are more significant. Besides that, it is obvious that the on-state losses of the boost converter are higher in absolute value (mostly due to diode losses). Indeed, if the power of the converters is the same, then the boost converter has a much higher input (coil) current, which leads to higher losses in the semiconductor switches.
The configuration of the dimmer has an impact on its switching losses too. The methodology for calculating these losses is given in [6]. In a slightly simplified version (for the worst-case analysis) it is represented by formulas (2)-(8) (see Fig. 3 for details), where the voltage rise and fall times are found from the gate resistor R G, the gate-to-drain capacitance C GD and the voltages V DR and V GSload. The other parameters are either known as the initial conditions (V DS - operating voltage of the switch, I Dton and I Dtoff - commutated current at the turn-on and turn-off transients respectively, f SW - switching frequency, R G - value of the gate resistor) or found from the datasheets (Q rr - reverse recovery charge of the utilized diode, t Irise and t Ifall - drain current rise and fall times, C GD - gate-to-drain capacitance, V GSload - gate voltage at which the drain current equals the load current).
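Since the exact worst-case expressions (2)-(8) follow [6] and are given there in full, the sketch below only illustrates a generic hard-switching loss estimate built from the same quantities (V DS, commutated currents, transition times, Q rr and f SW); the numerical values are assumed for illustration and are not data from the experiments.

def transistor_switching_loss(v_ds, i_on, i_off, t_on, t_off, q_rr, f_sw):
    """Generic worst-case hard-switching estimate: overlap energies at
    turn-on and turn-off plus the diode reverse-recovery contribution."""
    e_on = 0.5 * v_ds * i_on * t_on + q_rr * v_ds
    e_off = 0.5 * v_ds * i_off * t_off
    return (e_on + e_off) * f_sw   # W

# illustrative (assumed) numbers only:
p_sw = transistor_switching_loss(v_ds=25.0, i_on=3.0, i_off=3.0,
                                 t_on=100e-9, t_off=120e-9,
                                 q_rr=195e-9, f_sw=40e3)
print("estimated switching loss: %.2f W" % p_sw)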
(2)-(8) provide a basis for switching loss calculation. However, distribution of the losses across the operation range depends on the dimmer. In the case of buck converter V DS =V IN is a constant, but I D =I O depends on the duty cycle as expressed in (1). Then the switching losses of the transistor rise linearly with current ( Fig. 4-a).
In the boost converter V DS = V O , hence the losses are also a function of the duty cycle (see (1)). Moreover, in this converter the transistor commutates the input current which, due to the power equilibrium, undergoes a doubled 1/(1-D) effect. This leads to a stronger effect of D on the switching losses (Fig. 4).
Expected influence of modulation frequency.
Switching losses in the diode are defined mostly by its recovery process. In the boost converter they depend on the duty cycle D, but are still small compared with those of the transistor and, especially, with its on-state losses.
The impact of the switching frequency f SW on the commutation losses is linear and is expressed by (2) and (3). On the other hand, from Table 1-A3…B4 it is seen that this frequency has no effect on the conduction losses. If the thermal parameters of the transistor and diode are known, then it is possible to determine the maximal power losses and the maximal frequency of operation. For instance, for the no-heatsink situation the allowable transistor losses are P VTmax = (175-25)/62 = 2.4 W and the diode losses P VDmax = (175-25)/75 = 2 W. Then the maximal switching losses are 2.4-0.4 = 2 W for the transistor and 2-0.27 = 1.73 W for the diode. From here and from (2)…(3), the maximal frequency of the diode is 1.73 W/1219 nJ ≈ 1.42 MHz, while that of the transistor is 2 W/8370 nJ ≈ 0.24 MHz. The switching energies have been found previously utilizing (4)…(6).
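The no-heatsink frequency-limit estimate above can be reproduced with a few lines; the thermal resistances, conduction losses and switching energies are taken from the figures quoted in the text and are otherwise assumed.

T_J_MAX, T_AMB = 175.0, 25.0   # C, junction limit and ambient, as in the text

def max_switching_frequency(r_th, p_cond, e_sw):
    p_total_max = (T_J_MAX - T_AMB) / r_th   # W, total allowed dissipation
    p_sw_max = p_total_max - p_cond          # W, budget left for switching
    return p_sw_max / e_sw                   # Hz, since P_sw = E_sw * f_sw

f_vt = max_switching_frequency(r_th=62.0, p_cond=0.40, e_sw=8370e-9)   # transistor
f_vd = max_switching_frequency(r_th=75.0, p_cond=0.27, e_sw=1219e-9)   # diode
print("transistor limit: %.2f MHz" % (f_vt / 1e6))   # ~0.24 MHz
print("diode limit:      %.2f MHz" % (f_vd / 1e6))   # ~1.4 MHz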
On the other hand increasing the frequency decreases the value of reactive components linearly while their physical volume has square-root dependence.
Expected influence of inductance. The inductance of a coil has a direct impact on its volume, expressed through the proportionality coefficient A L that ties the inductance of the coil to the square of its number of turns. Therefore, if the coil utilizes the available wire window well, it is possible to say that the volume of the coil is proportional to the square root of its inductance.
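A minimal sketch of this scaling, assuming a hypothetical A L value (not a datasheet figure), is given below: the number of turns, and hence roughly the coil volume, grows with the square root of the inductance.

A_L = 70e-9   # H per turn^2, hypothetical core constant

def turns_for_inductance(l_henry, a_l=A_L):
    # L = A_L * N^2  ->  N = sqrt(L / A_L)
    return (l_henry / a_l) ** 0.5

for l_uH in (317, 417, 512, 610, 761):
    n = turns_for_inductance(l_uH * 1e-6)
    rel_volume = (l_uH / 317.0) ** 0.5   # volume ~ sqrt(L) for a well-used window
    print("L = %3d uH -> about %5.1f turns, relative volume ~ %.2f" % (l_uH, n, rel_volume))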
At the same time, a smaller inductance leads to higher current pulsations in the coil and, hence, in the transistor and diode. Therefore, the rms current of the transistor must be higher at a lower inductance. The corresponding dependence may be presented in a simplified form as in (9). However, (9) shows that this dependence is quite weak and can mostly be ignored.
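As a hedged illustration of why this dependence is weak, the snippet below uses the textbook rms value of an average current with a superimposed triangular ripple; whether this matches the exact form of (9) is an assumption.

def rms_with_ripple(i_avg, di_pp):
    # rms of an average current with a triangular peak-to-peak ripple di_pp
    return (i_avg ** 2 + di_pp ** 2 / 12.0) ** 0.5

for di in (0.5, 1.0, 2.0):   # A peak-to-peak, smaller inductance -> larger ripple
    print("dI = %.1f A -> I_rms = %.3f A" % (di, rms_with_ripple(3.0, di)))
# even a 2 A ripple on a 3 A average raises the rms value by less than 2 %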
Development of model and simulation
Initial evaluation of the DC/DC buck and boost dimmers has been made through PSpice simulation. For most of the elements a compromise between the complexity of the model and its tolerance has been achieved: the power diode and the MOSFET are simulated with built-in PSpice models, while the models of the coil and of the LED load utilize macrocircuits based on the datasheet parameters of the elements. At the same time, the input voltage source, the control source and the driver are simulated as ideal elements.
The simulation results of the buck and boost converters are presented in this subsection. The switching frequencies throughout the simulation have been set to 40 kHz, 80 kHz and 120 kHz, and the inductance of the coil to 317 uH, 417 uH, 512 uH, 610 uH and 761 uH.
The comparison of efficiency is presented in Fig. 5 and Fig. 6. Efficiency at a fixed value of L SM and different values of frequency is given in Fig. 5-a for the buck converter and in Fig. 6-a for the boost converter. Efficiency at a fixed value of f SW and different values of inductance is given in Fig. 5-b for the buck converter and in Fig. 6-b for the boost converter. As can be seen from the graphs, a higher switching frequency reduces the efficiency. The highest efficiency can be observed at a frequency of 40 kHz. According to the presented figures, the influence of the inductor is quite weak. This, however, may be a result of inaccuracy of its model. A more detailed model of the inductor could reveal decreased efficiency.
Simulation results show that buck converter switching topology is a good platform for a high efficiency LED drive system, because it provides higher efficiency than the boost converter.
Experimental evaluation of the dimmers
Buck converter. Efficiency of the buck converter has been evaluated with three switching frequencies -40, 80 and 120kHz (Fig. 7). It can be seen that the overall efficiency of the converter decreases with frequency growth. The reason is increasing switching losses in semiconductor switches, inductor core losses and conductor skin effect.
One more series of experiments has been conducted with different values of the inductance of the coil (Fig. 8). The tendency found in Fig. 8 is the same for different frequencies: the efficiency curve becomes more linear at a bigger inductance. Efficiency increases at smaller output powers with a bigger inductance. Smaller inductances provide better performance at higher output powers because of their smaller active resistance.
Three different cores (T94-26, T106-26 and T130-26) were used to evaluate the influence of the core size of a coil on its performance (Fig. 9). From Fig. 9 it is seen that a bigger inductor core size is better at lower output powers. At the same time, at higher output powers a smaller core is preferable because of the smaller active resistance of the wires (Table 3).
The skin effect has been evaluated at the next stage (Fig. 10). This effect appears at higher frequencies and reduces the effectively used conductor cross-sectional area. To reduce the conductor skin effect, several parallel wires of smaller diameter can be used (Table 4). Boost converter. The analysis of the experiments for the boost converter shows that at a higher frequency the losses increase, especially at higher output power (Fig. 11). At the same time, an increase of inductance (Fig. 12) causes an increase of losses at higher output powers. This can be explained by the growth of the inductor active resistance (Table 5). Fig. 13 shows that there is no significant impact of the core size.
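A rough skin-depth estimate for copper at the switching frequencies used here illustrates why the parallel thinner wires of Table 4 help; the resistivity value is a standard textbook figure and the sketch is only an illustration, not part of the measurements.

import math

RHO_CU = 1.68e-8            # ohm*m, copper resistivity at room temperature
MU0 = 4.0 * math.pi * 1e-7  # H/m

def skin_depth(f_hz, rho=RHO_CU, mu_r=1.0):
    return math.sqrt(rho / (math.pi * f_hz * MU0 * mu_r))

for f in (40e3, 80e3, 120e3):
    print("%5.0f kHz: skin depth ~ %.2f mm" % (f / 1e3, skin_depth(f) * 1e3))
# wires much thicker than roughly twice the skin depth are used inefficiently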
Conclusions
The most important conclusion is that the efficiency of the discussed converters and their weight/size are closely related. The losses of the semiconductor elements define the size of their heatsinks. The inductor itself contributes strongly to the overall size of the converter.
The presented data proves that buck dimmers may be more compact because of their better overall efficiency, especially at higher output power. This can be explained by the longer conduction time of the transistor and the shorter conduction time of the diode. This leads to smaller conduction losses in the diode and higher, but acceptable, losses in the transistor.
Using a lower frequency of operation reduces the losses of the semiconductor elements but requires a bigger inductor, and vice versa. A compromise can be found if the switches, with a reasonable heatsink, operate at the highest allowable heat transfer level.
The inductor has a contradictory effect on the size of the converter. A bigger inductor allows a reduction of frequency and losses but is bulky itself. It must also be noted that the converter efficiency at higher output power can be improved by using smaller cores, which reduce the resistance of the winding.
It must be specially emphasized that the impact of the wire resistance of the inductor is significant. Therefore, the effective (including skin and proximity effects) cross-sectional area of the wires must be kept high enough and their length short enough. For smaller inductances this can be achieved by using smaller inductor cores. | 2019-04-12T13:57:05.236Z | 2012-03-04T00:00:00.000 | {
"year": 2012,
"sha1": "016027fdcef32e7e7cd7dc846a068bc521a9a808",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5755/j01.eee.120.4.1453",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "016027fdcef32e7e7cd7dc846a068bc521a9a808",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Physics"
]
} |
3467273 | pes2o/s2orc | v3-fos-license | Rational and Irrational Approaches to Convince a Protein to Crystallize
The importance of structural biology has been highlighted in the past few years not only as part of drug discovery programs in the pharmaceutical industry but also by structural genomics programs. Mutations of human proteins have been long recognized as the source of severe diseases and a structural knowledge of the consequences of a mutation might open up new approaches of drugs and cure. Although the function of a protein can be studied by several biochemical and/or biophysical techniques, a detailed molecular understanding of the protein of interest can only be obtained by combining functional data with the knowledge of the three-dimensional structure. In principle three techniques exist to determine a protein structure, namely X-ray crystallography, nuclear magnetic resonance spectroscopy (NMR) and electron microscopy (EM). According to the protein data bank (pdb; http://www.rcsb.org) that provides a general and open-access platform for structures of biomolecules, X-ray crystallography contributes more than 90% of all structures in the pdb, a clear emphasis of the importance of this technique. To perform X-ray crystallography it is essential to have large amounts of pure and homogenous protein to perform an even today still “trail and error”-based screening matrix to obtain well diffracting protein crystals. Therefore, successful protein crystallization requires three major and crucial steps, all of them associate with specific problems and challenges that need to be overcome and solved. These steps are (I) protein expression, (II) protein purification and (III) the empirical search for crystallization conditions. As summarized in Figure 1, every single step needs to be optimized along the long and stoney road to obtain protein crystals suitable for structure determination of your “most-beloved” protein via X-ray crystallography. This chapter will focus on these three steps and suggests strategies how to perform and optimize each of these three steps on the road of protein structure determination.
Introduction
The importance of structural biology has been highlighted in the past few years not only as part of drug discovery programs in the pharmaceutical industry but also by structural genomics programs. Mutations of human proteins have long been recognized as the source of severe diseases, and structural knowledge of the consequences of a mutation might open up new approaches to drugs and cures. Although the function of a protein can be studied by several biochemical and/or biophysical techniques, a detailed molecular understanding of the protein of interest can only be obtained by combining functional data with the knowledge of the three-dimensional structure. In principle three techniques exist to determine a protein structure, namely X-ray crystallography, nuclear magnetic resonance spectroscopy (NMR) and electron microscopy (EM). According to the protein data bank (pdb; http://www.rcsb.org), which provides a general and open-access platform for structures of biomolecules, X-ray crystallography contributes more than 90% of all structures in the pdb, a clear emphasis of the importance of this technique. To perform X-ray crystallography it is essential to have large amounts of pure and homogenous protein to perform an even today still "trial and error"-based screening matrix to obtain well diffracting protein crystals. Therefore, successful protein crystallization requires three major and crucial steps, all of them associated with specific problems and challenges that need to be overcome and solved. These steps are (I) protein expression, (II) protein purification and (III) the empirical search for crystallization conditions. As summarized in Figure 1, every single step needs to be optimized along the long and stony road to obtain protein crystals suitable for structure determination of your "most-beloved" protein via X-ray crystallography. This chapter will focus on these three steps and suggests strategies for performing and optimizing each of them on the road of protein structure determination.
Protein expression (I)
To crystallize a protein, the first requirement is the expression of your protein in high amounts and most importantly on a regular basis.This implies that it is possible to obtain a freshly purified protein at least weekly.In general, it is possible to express a protein either homologously or heterologously (see Figure 1 -(I) expression).Especially for large proteins, (PDB:3RYA) (Berntsson, Doeven et al. 2009) and the multidrug binding transcriptional regulator LmrR (PDB:3F8B) (Madoori, Agustiandari et al. 2009).
Eukaryotic expression hosts
The great benefit of choosing a eukaryotic host for overexpression of a protein of interest is the availability of a posttranslational modification system as well as the frequently enhanced protein folding (Midgett and Madden 2007). Eukaryotic proteins tend to misfold or lack biological activity when expressed in prokaryotic expression systems such as E. coli (Cregg, Cereghino et al. 2000; Midgett and Madden 2007). To overexpress these proteins, different yeast strains, insect cells or even mammalian cell lines have been developed as expression hosts (Figure 1 - (I) expression, right side). Eukaryotic expression systems are often more expensive, provide low expression levels and are sometimes hard to handle when compared to bacterial systems. However, the genetic and cellular contexts are more similar to those of the original protein-expressing organism (Midgett and Madden 2007). In the following sections, some of the commonly employed eukaryotic expression systems will be described.
Yeast expression systems -Saccharomyces cerevisiae and Pichia pastoris
The most widely used yeast strains to express proteins are Saccharomyces cerevisiae and Pichia pastoris, which offer the major advantage of a posttranslational modification system for glycosylation, proteolytic processing as well as disulfide bond formation, which for some proteins are essential for the function and/or correct folding (Cregg, Cereghino et al. 2000; Midgett and Madden 2007). The handling of yeast expression systems is similar to prokaryotic systems with respect to the genetic background and cultivation. Similar to the bacterial vector systems, expression in yeast starts with a plasmid-based cloning part, which can be performed in E. coli (Cregg 2007). Afterwards the expression cassette gets integrated into the genome by simple homologous recombination in the yeast. One major advantage in P. pastoris is the insertion of multiple copies of the protein DNA sequence into genomic DNA, which increases the expression yield. The biggest advantage of yeast as an expression system is that well established protocols for fermentation are available. Optimal fermentation of P. pastoris can end up with more than 130 grams of cells per liter of culture. Even if expression levels in the cell are not that high, the mass of cells easily compensates for this disadvantage (Wegner 1990; Cregg, Cereghino et al. 2000; Hunt 2005; Cregg 2007; Midgett and Madden 2007). Examples of crystal structures from proteins expressed in P. pastoris are a human monoamine oxidase B (PDB:3PO7) (Binda, Aldeco et al. 2010) and a protein involved in cell adhesion, NCAM2 IG3-4 (PDB:2XY1) (Kulahin, Kristensen et al. 2011).
Insect cells
The expression system in insect cells is, besides yeast, a well-characterised alternative to express eukaryotic proteins (Midgett and Madden 2007). As insect cells are higher eukaryotic systems, their posttranslational modification machinery can carry out more complex alterations than yeast strains. They also have a machinery for the folding of mammalian proteins. The most commonly used vector system for recombinant protein expression in insect cells is baculovirus, which can also be used for gene transfer and expression in mammalian cells (Smith, Summers et al. 1983; D., L.K. et al. 1992; Altmann, Staudacher et al. 1999). A few examples of proteins expressed in insect cells that resulted in crystal structures are the transferase Ack1 (PDB:3EQP) (Kopecky, Hao et al. 2008), a human hydrolase (PDB:2PMS) (Senkovich, Cook et al. 2007) and myosin VI (PDB:2BKI) (Menetrey, Bahloul et al. 2005).
Mammalian cell lines
The expression of proteins in mammalian cell lines is the most expensive and complex alternative. Especially for human membrane proteins this expression system has been proven to express the most active protein (Tate, Haase et al. 2003; Lundstrom 2006; Lundstrom, Wagner et al. 2006; Eifler, Duckely et al. 2007). The protein amount obtained from mammalian cell lines, however, is mostly only sufficient for functional studies. Using mammalian cell lines is the most challenging variant of protein overexpression and is therefore only chosen if all of the other expression systems described have failed. Some examples of protein structures expressed in mammalian cell lines are the hydrolase PCSK9 (PDB:2QTW) (Hampton, Knuth et al. 2007) and the acetylcholine receptor AChBP (PDB:2BYQ) (Hansen, Sulzenbacher et al. 2006). Table 1 sums up advantages and disadvantages of the above mentioned overexpression systems used for protein crystallography.
Purification
After having expressed your protein of interest, the race for crystals is by no means finished. The next step on the long road to structure determination is to isolate the protein or, phrasing it differently, to remove all other proteins present in the cell (Figure 1 - (II) purification). An elegant method to do so is the genetic attachment of an affinity tag on either side of the protein or in some cases on both sides (Waugh 2005). This affinity tag binds with high affinity to an immobilized ligand on a matrix, while all other proteins have a much reduced binding affinity and therefore flow through the matrix (Figure 1 - (II) purification 1st step). This allows a one-step purification, which in almost all cases is relatively harmless for the protein and likely does not interfere with folding and/or the overall structure of the protein. There are a lot of affinity tags available as well as matrix materials (Terpe 2003). The best known and most often used affinity tag is the poly-histidine tag (Porath, Carlsson et al. 1975; Gaberc-Porekar and Menart 2001), which can vary in length as well as in position, but the overall purification strategy is the same. Of all the structures solved nowadays, almost 60% of the proteins are purified via a histidine tag, mainly due to the great purification efficiency, which can be as large as 90% after a single purification step (Gaberc-Porekar and Menart 2001; Arnau, Lauritzen et al. 2006). Therefore, most commercially available expression systems and methods contain a his-tag encoded on the plasmid. Besides the his-tag, there are other tags available and used for protein purification, of which the Strep-, CBP-, GST- and MBP-tag are described below.
Choice of the right tag 3.1.1 Polyhistidine-tag (his-tag)
As mentioned above, the polyhistidine-tag is the most common affinity tag and the required affinity resins and chemicals are relatively inexpensive. The purification step is a so-called immobilized metal ion affinity chromatography (IMAC) (Porath, Carlsson et al. 1975). Here, a matrix is able to bind bivalent metal ions, for example via nitrilotriacetic acid (NTA), which is a chelator and binds metal ions like Ni2+, Zn2+, Co2+ or Cu2+ (Hochuli, Dobeli et al. 1987). These metal ions have a high affinity to the imidazole group of the amino acid histidine. A stretch of histidines in a row is very unusual in, for example, an E. coli protein. Thus, the genetic introduction of several, in most cases 6-10, histidines in a row selects for specific binding of this protein. As eluent, imidazole can very elegantly be used, which competes with the histidine tag and elutes the protein of interest. When used in low concentrations, imidazole can also be used to remove unspecifically bound proteins, which bind with low affinity to the matrix (Hefti, Van Vugt-Van der Toorn et al. 2001). Normally, a protein with a 6-10 histidine tag should be bound to the matrix relatively strongly and 100-250 mM imidazole in the buffer is required to elute the protein from the resin. In contrast, proteins with a low affinity to the matrix (the "impurities" of E. coli) can already be eluted with 10-50 mM imidazole. Therefore, a linear imidazole gradient, for example, separates the protein of interest and the impurities (Hochuli, Dobeli et al. 1987; Gaberc-Porekar and Menart 2001). Although the polyhistidine-tag is the most common and mostly an efficient variant, there are a few applications where the his-tag can cause problems. Metalloproteins can interact either directly with the his-tag or with the ions immobilized on the matrix. In comparison to some other affinity-tags, the specificity of the his-tag is not that high and in some cases this results in the co-purification of other proteins (Waugh 2005).
Strep-tag
In comparison to the his-tag, which binds to immobilized metal ions, the Strep-tag II consists of a small octapeptide (WSHPQFEK), which binds to the protein streptavidin (Schmidt, Koepke et al. 1996). The commercially available matrix is a streptavidin variant called Strep-Tactin. This variant is able to bind the Strep-tag II octapeptide under mild buffer conditions and the tag can be gently eluted with biotin derivatives such as desthiobiotin (Schmidt, Koepke et al. 1996; Voss and Skerra 1997). Especially for metal-ion containing enzymes it is a promising alternative to the his-tag (Groß, Pisa et al. 2002). However, as the chemicals are more expensive and the matrix has a lower binding capacity compared to NTA resins, it is often not the first option chosen. Moreover, it cannot be used under denaturing conditions, since Strep-Tactin denatures and will not bind the tag anymore (Terpe 2003; Waugh 2005). Examples of proteins crystallized after a Strep-tag purification are OpuBC (PDB:3R6U) (Pittelkow, Tschapek et al. 2011) and AfProX (PDB:3MAM) (Tschapek, Pittelkow et al. 2011) as well as the sodium-dependent glycine betaine transporter BetP from Corynebacterium glutamicum (PDB:2WIT) (Ressl, Terwisscha van Scheltinga et al. 2009).
CBP-tag
Another peptide tag is the calmodulin binding peptide, first described in 1992 (Stofko-Hahn, Carr et al. 1992). This peptide is longer than the Strep-tag II, consisting of 26 amino acids, and binds with nanomolar affinity to calmodulin in the presence of Ca2+ (Blumenthal, Takio et al. 1985). It is derived from the C-terminus of the skeletal-muscle myosin light-chain kinase, which makes the system an excellent choice for proteins expressed using a prokaryotic expression system, since in prokaryotic systems nearly no protein interacts with calmodulin. This allows extensive washing to remove impurities and elution with EGTA, which specifically complexes Ca2+, and a protein recovery of around 90% can be achieved (Terpe 2003). A drawback of this tag, however, is that the CBP tag can only be fused to the C-terminus of the protein, since it has been shown that CBP on the N-terminus negatively influences the translation and thereby the expression rate (Zheng, Simcox et al. 1997).
GST-tag
With respect to the length of the tags, the his-tag contains only a few amino acids and the Strep-tag II and the CBP-tag already contain 8-26 amino acids, but it is also possible to fuse whole proteins of 26-40 kDa to a recombinant protein. Here, the high-affinity binding of the fused protein to its substrate is used to purify the protein of interest (Smith and Johnson 1988). In the case of glutathione S-transferase (GST, 26 kDa), the protein specifically binds to immobilized glutathione. To elute the fusion protein from the resin, non-denaturing buffer conditions employing reduced glutathione are used (Terpe 2003). The tag can help to protect the recombinant protein from degradation by cellular proteases. It is recommended to cleave off the GST-tag after purification with a specific protease like thrombin or TEV (Tobacco Etch Virus) protease (Terpe 2003).
MBP-tag
Another affinity tag which can be fused to the protein of interest is the maltose binding protein (MBP) from E. coli. This protein has a molecular weight of 40 kDa and has the ability to bind to a cross-linked amylose matrix. The binding affinity is in the micromolar range and the tag can be used in a pH range from 7.0-8.0; denaturing buffer conditions, however, are not possible (di Guan, Li et al. 1988). The elution of the recombinant protein is recommended with 10 mM maltose. A great advantage of the MBP-tag is the solubility-increasing effect on the recombinant protein in prokaryotic expression systems, which is even more pronounced in eukaryotic systems (Sachdev and Chirgwin 1999). Like the CBP-tag, a fusion at the N-terminal side might influence translation and expression rates (Sachdev and Chirgwin 1999).
Tag position and double tags
As described above, the position of the tag either at the N- or C-terminus has a considerable influence on the translation and expression rate as well as on the biological function (Arnau, Lauritzen et al. 2006). If information regarding the activity of the protein is already available, especially about the location of interaction sites, this should be included in the protein design, meaning tag position etc. In general, the tag should be placed at the position of the protein which is less important for interactions and/or expression. To minimize the influence of the tag on folding and/or activity, in some cases it helps to create a linker region of a few amino acids between the tag and the protein (Gingras, Aebersold et al. 2005). A very efficient and sophisticated solution is the addition of amino acids between the tag and the protein of interest, which functions not only as an accessibility-increasing factor but also encodes a recognition site for proteases like thrombin or TEV. Due to this arrangement the tag-protein interaction is minimized and the tag can be cleaved off if necessary (Arnau, Lauritzen et al. 2006). In some special cases a combination of two affinity tags results in enhanced solubility and more efficient purification. To enhance the purity of a protein, often a construct of two different short affinity tags like the his-tag and the Strep-tag or CBP-tag can be engineered (Rubio, Shen et al. 2005). Also a combination of two his-tags or two Strep-tags kept apart by a linker region enhances the binding affinity considerably. This allows more stringent washing steps prior to elution of the protein (Fischer, Leech et al. 2011).
Size exclusion chromatography and ion exchange chromatography
Despite the usage of affinity tags, a second purification step is sometimes required (Figure 1 - (II) purification). Which kind of purification procedure is required depends on the nature of the impurities. If these impurities differ in molar mass compared to the protein of interest, a method based on size separation can be applied. Size exclusion chromatography (SEC) also separates different oligomeric species of the protein from each other, which otherwise would strongly inhibit crystallization, and also allows analysis of the stability and monodispersity of the protein (Regnier 1983a; Regnier 1983b). However, in many cases SEC is not sufficient to remove all impurities. Then separation by the overall charge of the protein might be an option. Depending on the isoelectric point of the protein, either anion or cation exchange chromatography can be performed. The protein binds to a matrix under very low ionic strength and is eluted afterwards either by increasing the ionic strength or by pH variation. Similar results can be achieved by hydrophobic interaction chromatography. Here, proteins with different surface properties show differences in their binding strength, and binding of the protein is done inversely to ion exchange chromatography: high ionic strength favors protein binding to a hydrophobic matrix and elution takes place when reducing the ionic strength. Although there are many other possibilities to increase the purity of a protein, the above mentioned techniques are without any doubt the most widely used and generally applicable methods.
How to get a homogenous protein solution
In some cases isolated proteins are stable and homogenous at high concentrations after the purification and can be directly used for crystallization experiments. Often, however, the protein does not behave ideally and precipitates at high concentrations or forms aggregates or inhomogenous, oligomeric species; all of them prohibit crystal growth. SEC is a very elegant method to visualize the stability and oligomeric state of a protein. If the stability or the homogeneity of a protein sample is critical, you need to adapt your purification protocol and search for an optimized procedure. Different approaches are summarized below, for example a buffer screen to enhance protein solubility, multi-angle light scattering experiments to determine the absolute mass and the oligomeric state of the protein sample or fluorescence-based experiments to investigate the stability of the protein of interest.
Purified proteins -An in vitro system
After a protein is expressed in a soluble form, the subsequent purification procedure changes the environment of the protein dramatically. The cytoplasm of the cells, where the overexpression takes place, is packed with macromolecules. In E. coli, for example, the concentrations of proteins, RNAs and DNAs are about 320 mg/mL, 120 mg/mL, and 18 mg/mL, respectively (Cayley, Lewis et al. 1991; Zimmerman and Trach 1991; Elowitz, Surette et al. 1999), resulting in an overall concentration of macromolecules of above 450 mg/mL. During cell lysis and the first purification step, likely an IMAC (see above), the protein is separated from almost all other cell components. This rigorous procedure is accompanied by a severe change of the environment into an in vitro system. As a result proteins often tend to aggregate, precipitate or form inhomogeneous oligomeric states that prevent the formation of crystals in further experiments. Therefore one of the biggest challenges in structural studies is the preparation of protein solutions with high concentrations (as a rule of thumb 10-20 mg/mL) in a homogenous state. To fulfill these requirements, the in vitro system needs to be optimized with respect to different parameters as highlighted in Figure 1 - (II) purification. If a sufficient protein sample cannot be obtained, different strategies are available to increase the important characteristics of the protein: purity and homogeneity. As mentioned above, the usage of different metal ions during IMAC, ion exchange, a second affinity chromatography etc. can be sufficient to enhance purity. This might also lead to an increased stability. However, if the stability and/or homogeneity of a protein is still a problem, screening for a new buffer composition is essential to succeed during crystallization trials.
Buffer composition
Many examples illustrate the importance of an adequate buffer composition for protein stability, homogeneity, conformation, and activity (Urh, York et al. 1995; Holm and Hansen 2001; Jancarik, Pufan et al. 2004; Collins, Stevens et al. 2005). Some buffers are very frequently used and recommended by manufacturers (see for example Qiagen, Roche, New England BioLabs, Fermentas, etc.). All of them contain a buffer reagent that keeps the pH constant in a well-defined range. Well-known examples are phosphate, tris(hydroxymethyl)aminomethane (Tris), or HEPES (4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid) that buffer at the physiologically relevant pH range of 6-9 (Durst and Staples 1972; Chagnon and Corbeil 1973; Tornquist, Paallysaho et al. 1995). In recent years, the development of other buffer systems has been quite successful (Taha 2005) (for a list of buffers and corresponding pH ranges, see for example: http://delloyd.50megs.com/moreinfo/buffers2.html). Next to the well-defined pH, the stability and homogeneity of proteins depend on many other parameters, for example ionic strength, the presence of ligands and/or co-factors, divalent ions, glycerol, etc. The appropriate buffer composition cannot be predicted so far and needs to be identified by trial-and-error approaches.
Protein purification -How to overcome problems
In this part we would like to present some pitfalls that might occur during protein purification and provide some 'rationales' to overcome these problems. As usual, the crucial step of solving a problem is its identification. Here, we are trying to sensitize the reader to indications which might point towards problems related to instability and/or inhomogeneity of the protein sample. Moreover, such problems cannot always be recognized without the adequate technique(s). Therefore, we are introducing techniques that are capable of visualizing the state of proteins.
Visible protein precipitations during IMAC
A very obvious stability problem is the formation of precipitations in the elution fractions of a chromatography step (see Figure 2). In this example, the his-tagged protein was eluted with a linear imidazole gradient from 10 to 500 mM imidazole and eluted at about 250 mM imidazole. Protein precipitation occurred immediately after elution (Figure 2A and B) and continued (Figure 2C), resulting in a low amount of soluble protein. This aggregation can be reduced by dilution with an IMAC buffer (typically lacking imidazole) immediately after the elution. Thereby, dilution hinders the concentration-dependent aggregation. In many cases, this rationale is not sufficient to prevent precipitation. After applying, for example, a buffer screen (see Figure 1 - (II) purification), the newly defined buffer is used for the chromatography or the eluting protein is diluted into the new buffer (see Figure 2D). Other elution strategies of his-tagged proteins from an IMAC column are available. As described before, competing the poly-histidine from the IMAC column with imidazole is the most common elution strategy; however, for some proteins other strategies are superior, for example replacing imidazole by histidine. Imidazole is only a mimic for histidine. If one uses histidine instead of imidazole, aggregation can be avoided as the concentration of the eluent can be reduced by a factor of ten. An example of a protein sensitive to the imidazole concentration is shown in Figure 3B. Here a comparative SEC chromatogram is shown. After elution from the IMAC column with imidazole, only a very small amount of the protein elutes at the volume corresponding to the size of a monomer or the dimer, respectively (Figure 3B, continuous line). Most of the protein passes the column very fast and elutes at the void volume, indicating large radii, meaning aggregated protein. Yields of dimeric (at about 150 mL) and monomeric (at about 180 mL) proteins are strongly increased after an elution with histidine (dashed line) compared to an elution with imidazole (continuous line), and only the monomeric species could be crystallized (data not shown). The choice of the eluent in IMAC might therefore be an important step in a purification protocol. Another elution strategy of his-tagged proteins is a pH change from 8 to 4. In an acidic environment, histidines become positively charged and are therefore released from the column matrix. This strategy results in a sharp elution from the matrix and the protein is eluted highly concentrated. Although this strategy is recommended by the manufacturers (see GE Healthcare, Qiagen, etc.), the desired protein needs to retain activity at acidic pHs. As another elution strategy, the bivalent metal ions (Ni2+, Co2+, Zn2+, …, see above) that complex the his-tag can be removed from the matrix by chelating reagents such as ethylenediaminetetraacetic acid (EDTA) (Muller, Arndt et al. 1998). (Figure 2 caption: The elution fractions were immediately mixed in a 1 to 1 ratio with a buffer that enhances the protein stability (50 mM citrate, 50 mM LiCl, pH 6.00), evaluated during a solubility screen.)
Invisible aggregations
Sometimes aggregation of proteins in solution cannot be detected directly by eye. This inhomogeneity of protein samples can be visualized by SEC, a method that separates proteins by their hydrodynamic radius (see above). Protein aggregates elute at the void volume, since they are clumped together, resulting in a big hydrodynamic radius (see Figure 3A and B). If invisible aggregation is detected, the buffer composition needs to be adjusted. In one case we applied this technique to visualize the state of a protein after an IMAC, and the resulting elution profile is shown in Figure 3A (continuous line). Comparable to the imidazole-induced precipitation described above, the protein aggregated and elutes within the void volume of the column (about 40 mL). Moreover, several other protein species elute from 55 to 80 mL, indicating a highly inhomogeneous protein sample. The running buffer of the SEC was 50 mM Tris-HCl, pH 8.0 and 150 mM NaCl. Remarkably, a simple change to a new buffer (20 mM HEPES, 150 mM NaCl, pH 7.0) resulted in a stable and homogenous protein sample (Figure 3A, dotted line), which was suitable for crystallization trials. Next to the rigorous change in the homogeneity of the protein, the biological activity of the protein could only be determined in the new buffer system. The influence of the buffer composition on the protein activity is a well-known phenomenon (Urh, York et al. 1995; Holm and Hansen 2001; Zaitseva, Jenewein et al. 2005) and in many cases the activity goes hand in hand with an optimal buffer for the purification. Notably, the new buffer was not found by trial-and-error approaches. We searched the literature dealing with homologous proteins, especially for established purification protocols. This literature search revealed the new buffer, illustrating that not every step towards a protein structure determination must be a trial-and-error process. Another example of the influence of the buffer composition was published by Mavaro et al. (Mavaro, Abts et al. 2011). Instead of the buffer agent, the ionic strength of the buffer was the crucial determinant. Purification of the protein in a low-salt buffer resulted in an inhomogenous protein sample containing a mixture of aggregates, dimers and monomers without biological activity. However, a simple change to a high-salt buffer allowed the purification of a homogenous dimeric protein that was able to bind its substrate.
Overcoming protein instability
In the previous sections different strategies were mentioned to enhance the stability and the homogeneity of purified proteins, and in all cases the buffer composition was the solution. Still, the essential question of how to determine the optimal buffer to make a protein feel happy in solution is not answered. Some rationales and experiences are listed above: different elution strategies for IMAC purifications, the usage of frequently used buffer agents and a literature search for established purification protocols of related proteins. However, in many cases these approaches do not solve the problems occurring during the purification. But is there a general methodology to overcome the problems? Unfortunately, the answer is as frustrating as it is challenging: there is no general panacea for the right buffer composition of a protein. If a new buffer needs to be found, trial-and-error approaches have to be applied. A lot of different parameters influence the state of a protein, i.e. the buffer agent, the salt concentration, the presence of metal ions with different valences, the hydrophobicity, and even the temperature of the buffer. The analysis of the protein in different buffers can be done by SEC and/or light scattering experiments. However, screening of all the different variables is very labor- and cost-intensive and time-consuming; moreover, only combinations of two or more additives might be sufficient to enhance the solubility and homogeneity of the protein. Therefore high-throughput methods are needed that handle a lot of different conditions simultaneously using as little protein sample as possible.
Buffer screen -Enhancing the solubility
Many publications are available suggesting methods for a solubility screening to allow the crystallization of initially inhomogeneous, aggregating protein samples (Jancarik, Pufan et al. 2004; Zaitseva, Holland et al. 2004; Collins, Stevens et al. 2005; Sala and de Marco 2010; Schwarz, Tschapek et al. 2011). In all of these methods aggregating protein samples are mixed with commercially available crystallization screens, incubated for a period of time, and analyzed for precipitation visually using a light microscope. Screening conditions resulting in no precipitations are analyzed upon their composition, and protein samples are further examined with respect to their solubility and homogeneity under these conditions by SEC or light scattering experiments. This technique allows high-throughput screening in a 96-well format, where an automated pipetting system mixes only 50-200 nL of protein solution with 50-200 nL of buffer solutions to minimize the needed protein sample and increase the screening efficiency. Several buffer screens are commercially available that cover many different buffer agents, salt concentrations and other buffer parameters (i.e. from Hampton Research, Molecular Dimensions, Sigma, Jena Bioscience, Qiagen). After a solubility screening was applied, we were able to stabilize a protein that was previously unstable at concentrations above 3 mg/mL (see above "Visible protein precipitations during IMAC" and Figure 2D) at concentrations of up to 100 mg/mL for weeks (Schwarz, Tschapek et al. 2011). Typically, the new buffer (50 mM citrate, 50 mM LiCl, pH 6.00) should be used during the entire purification procedure starting with cell lysis. In the described case, the new buffer contains citrate, which is incompatible with an IMAC purification. Therefore the protein was immediately mixed with the new buffer after the elution from the IMAC column.
Size-exclusion chromatography versus light scattering experiments
Size-exclusion chromatography (SEC) and light scattering experiments (LS) are very helpful tools to analyze the homogeneity (Collins, Stevens et al. 2005) and the molecular mass of proteins; however, both of them have advantages and disadvantages compared to each other. In SEC experiments proteins are separated based on their hydrodynamic radius by partitioning between a mobile phase and a stationary liquid within the pores of a matrix. All SEC columns are characterized by the volumes V0, the liquid volume in the interstitial space between particles, Vi, the volume contained in the matrix pores, and VT, the total diffusion volume (V0 + Vi) (Regnier 1983a; Regnier 1983b). Depending on the hydrodynamic radius, molecules elute at specific retention volumes in between V0 and VT, with big molecules eluting first. After a calibration of a SEC column with proteins of known molecular weight (i.e. Sigma-Aldrich, "Kit for Molecular Weights"), the molecular mass of the protein of interest can be roughly estimated; the elution volume is correlated to the log10 of the molecular weight (therefore, the hydrodynamic radius is considered to be proportional to the molecular weight). However, many extraneous mechanisms such as adsorptive, hydrophobic and ionic effects further limit the correlation between the retention volume and the molecular mass, sometimes giving rise to wrong estimations. Light scattering (LS) experiments can be applied to overcome these disadvantages and investigate the exact molecular weight of the protein sample. The Rayleigh scattering of monochromatic light by particles depends directly on the molar mass of the particles. If you know the exact number of particles, you can calculate the average molar mass of these particles. This technique is very powerful when used online after separation of the proteins depending on their hydrodynamic radius, meaning SEC. This technique is always superior to normal SEC but requires special equipment and especially more time. However, if the protein fold is not really globular or other effects occur (see above: ionic, hydrophobic, etc.), an assumption on size and oligomeric state based on SEC alone is not possible at all. For protein crystallization, information about monodispersity, which can be provided by such an experiment, is an additional benefit.
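A minimal sketch of the calibration-based mass estimate described above is given below; the calibration standards and elution volumes are invented for illustration only, and any real column needs the caveats just mentioned.

import math

# (elution volume in mL, molecular weight in kDa) of calibration standards;
# the values are hypothetical and serve only to show the principle
standards = [(48.0, 669.0), (55.0, 200.0), (62.0, 66.0), (70.0, 29.0), (78.0, 12.4)]

xs = [math.log10(mw) for _, mw in standards]
ys = [ve for ve, _ in standards]
n = len(standards)

# least-squares fit of V_e = a * log10(MW) + b
a = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
b = (sum(ys) - a * sum(xs)) / n

def estimate_mw_kda(elution_volume_ml):
    return 10 ** ((elution_volume_ml - b) / a)

print("a peak at 65 mL corresponds to roughly %.0f kDa" % estimate_mw_kda(65.0))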
Analysis of the homogeneity -High-throughput methods
Despite the development of various sophisticated methods, a bottleneck of homogeneity screening is high-throughput analysis. As mentioned above, proteins need to be analyzed by SEC and/or LS experiments after visual read-out of the protein-buffer droplets. Therefore, fluorescence-based solubility screens were developed that allow the high-throughput analysis of many samples in a 96-well format (Ericsson, Hallberg et al. 2006; Alexandrov, Mileni et al. 2008; Kean, Cleverley et al. 2008). All these assays use fluorophores as reporters of the protein state. A suitable fluorophore is, for example, Sypro Orange, which exhibits different fluorescence properties as a function of its environment. This dye is almost dark in a hydrophilic environment; however, after binding to hydrophobic molecules, it emits light at 570 nm. In inhomogenous and unfolded protein samples hydrophobic amino acids are exposed on the surface of proteins (Murphy, Privalov et al. 1990). An increase in the fluorescence signal of Sypro Orange therefore correlates with unfolding events of proteins. The homogeneity screening can be performed in basically two ways: temperature- or time-dependent. For the first setup the protein sample is heated gradually in distinct steps (i.e. 1 °C) and the emission is monitored at 570 nm. Hereby, a "melting" temperature is determined, which is characterized by 50% fluorescence of the maximal fluorescence at the highest temperature; the higher the melting temperature, the higher the stability of the protein (Ericsson, Hallberg et al. 2006). Secondly, the protein sample is incubated at a specific temperature (i.e. 40 °C) and the fluorescence is measured for a period of time. The "half-life" time, at which 50% of the maximum fluorescence in one sample is detected, can be compared between all buffer conditions. In Figure 4 an example of the time-dependent approach is shown. Here, the protein is incubated in different buffers with various salt concentrations. The emission of Sypro Orange is recorded each minute at 570 nm. An analysis of all time-dependent fluorescence plots indicates that the protein is most stable in buffers containing 125 mM NaCl but unfolds fast in 1 M ammonium sulfate. These assays result in qualitative indications about a favourable environment of proteins that enhances the stability. Ericsson et al. proved the concept of this method by applying it to different proteins (Ericsson, Hallberg et al. 2006). The stability optimization yielded a twofold increase in initial crystallization leads. Moreover, these assays enable the search for putative ligands of the protein. Upon binding of a substrate in the binding pocket or of an inhibitor, the stability of the protein increases, which can be detected experimentally.
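The 50%-of-maximum read-out can be sketched in a few lines; the melt curve below is invented for illustration and the snippet is only meant to show the principle of the temperature-dependent variant, not a validated analysis routine.

def melting_temperature(temps, signal):
    """First temperature where the signal crosses 50 % of its maximum,
    with linear interpolation between the two bracketing points."""
    half = 0.5 * max(signal)
    for i in range(1, len(temps)):
        if signal[i - 1] < half <= signal[i]:
            frac = (half - signal[i - 1]) / (signal[i] - signal[i - 1])
            return temps[i - 1] + frac * (temps[i] - temps[i - 1])
    return None

# invented melt curve: 1 degree steps from 25 to 70 C, sigmoidal unfolding near 52 C
temps = list(range(25, 71))
signal = [100.0 / (1.0 + 2.718281828 ** (-(t - 52.0))) for t in temps]
print("apparent Tm ~ %.1f C" % melting_temperature(temps, signal))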
Protein crystallization: Introduction
Protein crystals suitable for X-ray diffraction experiments and usable for subsequent structure determination are normally relatively large, with a size of at least 10 to 100 µm. In contrast to crystals of mineral compounds, protein crystals are rather soft and sensitive to mechanical stress and temperature fluctuations. These properties are due to the weak interactions between single proteins within the crystal, their high flexibility as well as the size of the macromolecules. The periodic network of building blocks is held together by dipole-dipole interactions, hydrogen bonds, salt bridges, van der Waals contacts or hydrophobic interactions, all of which have binding energies in the low kcal/mol range. Especially the limited number of crystal contacts and their directionality are the largest difference to the strong interactions generally observed in salt crystals. An example of the interactions within a protein crystal is shown in Figure 5. This picture highlights the main pitfalls in protein crystallization: a protein is a highly irregularly shaped and flexible macromolecule which allows only weak and restricted interactions at very specific locations of its surface. All void volume is filled with buffer, which in general does not contribute to any kind of interaction between the protein molecules. Figure 5A shows a protein of around 30 kDa, which crystallizes in a rather small unit cell (shown in black). Only one protein monomer is located in the asymmetric unit of the unit cell; the other monomers shown represent symmetry-related proteins. Figure 5B highlights the three-dimensional packing of protein molecules within a crystal. The flexibility as well as the other mentioned characteristics of proteins are responsible for the problems occurring during crystallization trials, and despite extensive efforts not every protein is suitable for crystallization. If one cannot generate crystals, one has to move back several steps and change the properties of the protein, e.g. the surface properties by mutation of single amino acids, truncation of the protein or sometimes only a change in buffer composition, to obtain a protein more suitable for crystallization (see Figure 1 and also below). There are several prediction servers available that help choosing the 'right' protein and modification (Linding, Jensen et al. 2003; Goldschmidt, Cooper et al. 2007). However, protein crystallization still remains an empirical approach, sometimes called voodoo, while crystallography is science.
Phase diagram
The conditions or protocols for obtaining good crystals are still poorly understood, and despite all progress and efforts protein crystallization remains a trial-and-error approach. However, a step towards a better understanding of crystal growth can be achieved by analyzing the phase diagram of a protein-water mixture. The phase diagram is a simple illustration that helps to understand how protein crystals are formed. Mostly, it is shown as a function of two ambient conditions that can be manipulated, e.g. the temperature and the concentration. Three-dimensional diagrams (two dependent parameters) have also been reported (Sauter, Lorber et al. 1999), and even a few more complex ones have been determined as well (Ewing, Forsythe et al. 1994). Figure 6 shows a schematic phase diagram for a protein solution as a function of protein concentration and precipitant concentration. The phase diagram is broken down into four distinct zones (Rosenbaum and Zukoski 1996; Haas and Drenth 1999; Asherie 2004): 1. Undersaturated zone: under these conditions the protein will stay in solution, as neither the concentration of the protein nor that of the precipitant is high enough to reach supersaturation. 2. Precipitation zone: if the protein concentration or the precipitant concentration is too high, the protein precipitates out of solution; this kind of solid material is not useful for crystallographic studies.
3. Labile zone: this is the most important configuration of the two parameters, as nucleation and initial crystal growth take place under these conditions. 4. Metastable zone: after initial crystals are formed and start growing in the labile zone, the protein concentration in the drop decreases and the metastable zone is reached.
Here the crystal can grow further to its final maximum size.
Fig. 6. A basic solubility phase diagram for a given temperature (adapted from Rupp 2007).
The curve separating the undersaturated zone from the supersaturated one is called the solubility curve. If conditions are chosen below the solubility curve, the protein will stay in solution and never crystallize. This means that when a protein crystal is placed in a solvent which is free of protein, it will start to dissolve. If the volume of the droplet is small enough, it will not dissolve completely: it will stop dissolving when the concentration of the protein in the droplet reaches a certain level. At this concentration the crystal loses protein molecules at the same rate at which protein molecules associate to the crystal - the system is at equilibrium. Determining the solubility of the protein of interest can therefore provide helpful information at the beginning of crystallization experiments. This can be done in a two-dimensional screen, varying for example the ammonium sulfate concentration as well as the protein concentration.
Crystallization techniques
Crystallization is a phase transition phenomenon. Protein crystals grow from a supersaturated aqueous protein solution. Varying the concentration of precipitant, protein and additives, the pH, the temperature and other parameters induces the supersaturation. However, as mentioned before, prediction of this kind of phase diagram is a priori impossible. Protein crystallization can be divided into two main steps: 1. Generating initial crystals: 'searching the needle in a haystack'. 2. Empirical optimization of these crystallization conditions. The first step is mostly based on experience from crystallization trials with other proteins. Nowadays several suppliers offer crystallization screens that contain solutions which were used successfully in the past for crystallization trials (Jancarik and Kim 1991), so-called "sparse matrix screens". There are also attempts to use more systematic approaches (Brzozowski and Walton 2000) to get more information about solubility prior and parallel to crystallization (incomplete factorials, solubility assays). Both kinds of screens can be applied to different crystallization techniques.
Fig. 7. Crystal optimization. The first steps in crystal optimization are shown. Initial protein crystals look weak and fragile; after screening around this initial buffer composition, crystal evaluation by eye results in less fragile, homogeneous-looking crystals. However, the diffraction quality was poor. Therefore an additive screening was performed that resulted in a different crystal form. These crystals finally were able to diffract X-rays to a reasonable resolution.
A lead/hit from that initial step might not be a 'real' crystal but rather a crystalline precipitate or just a phase separation. In the next step, fine-tuning of the buffer composition further optimizes this hit. Varying the pH, salt concentration, type and concentration of precipitant and protein concentration is expected to yield larger and hopefully also better-diffracting crystals. In this step the chemicals used are much more defined, and the screening is therefore more systematic than empirical (see Figure 7).
Vapor diffusion
The most popular and simplest technique to obtain protein crystals is the vapor diffusion method, either in the sitting or hanging drop variant (see Figure 8). In both, a defined volume (mostly < 1 µl) of protein solution is mixed with an equivalent volume of screening solution and then equilibrated against the original precipitant/screening concentration in the reservoir. During this equilibration, water evaporates from the drop (the diluted drop has a higher water activity than the reservoir solution), which causes the precipitant and protein concentrations in the drop to rise. Therefore, if crystal growth is sensitive to the precipitant concentration, vapor diffusion can rapidly force the mixture into unstable conditions where growth and nucleation are too rapid. This is the main disadvantage of vapor diffusion: growing large crystals might be problematic!
Micro batch method
In this set-up the protein solution is mixed with screening solution at concentrations required for supersaturation right at the beginning of the experiment. Typical drop sizes of micro batch experiments range from 1-2 µl. The drop is then covered with oil, which acts as an inert sealing that protects the drop from evaporation during incubation (see Figure 8).
Micro dialysis
Dialysis is another way to change the buffer composition and to increase its concentration gradually in the crystallization experiment (see Figure 8). Micro-dialysis buttons are exposed to different screening buffers. This method requires rather high amounts of protein but might yield large crystals. After obtaining initial crystal hits in a commercial screen, the tough part of crystal optimization starts. By varying pH, salt concentration, temperature, precipitant concentration or protein concentration, these initial crystals should be reproduced and should become larger, more regularly shaped or simply grow faster. A further improvement of crystal quality might be achieved by the addition of small amounts of so-called 'additives'. At this point basically any chemical compound might improve the crystal quality. Luckily, there are some preferably working additives which have been proven to produce better crystals in more than one case. Especially compounds that are known to reduce undirected interactions of proteins, such as organic solvents (e.g. DMSO or phenol), detergents and reducing agents, are very often used at this stage and are helpful to obtain more homogeneous, well-diffracting crystals.
Crystal nucleation
There are two fundamental steps during protein crystallization: nucleation and crystal growth. If one cannot obtain single crystals of adequate quality for analysis, this is generally a consequence of problems associated with the growth phase (see above). But failure to obtain any crystals at all, or failure to obtain single, supportable nuclei, reflects difficulties in the nucleation step. Therefore, control of nucleation is a powerful tool to optimize protein crystals; sometimes it is the only way to get crystals at all. Nucleation can take place either homogeneously, meaning in the bulk of the solution when the supersaturation is high enough for the free-energy barrier to nucleus formation to be overcome, or heterogeneously, mostly triggered by solid material in the solution. The latter can occur even when supersaturation is not achieved. Therefore, in order to control nucleation, one has to work with highly clean solutions to avoid nucleation by the second mentioned route. The nucleation zone can be bypassed by insertion of crystals, crystal seeds or other nucleants into the protein/precipitant mixture. Addition of crystals or tiny fragments of crystals is called seeding. This method is subdivided into macro- and micro-seeding, depending on the size of the nucleant added. In macro-seeding experiments one single, already well-formed but small crystal is placed into a new crystallization solution at lower saturation. Micro-seeding, in contrast, requires small fragments of a crystal or an almost invisible microcrystalline precipitate. These 'seeds' are then transferred into a fresh crystallization solution either by a seeding wand, which is dipped into the microseed mixture to pick up seeds and then drawn across the surface of the new drop, or by an animal whisker or hair that is stroked over the surface of the parent crystal to trap the nuclei and then drawn through the new drop. As this method also enhances the speed of crystal growth, it can be used with sensitive substrates that undergo decomposition over time. Oswald et al. proved this in 2008 by solving the structure of ChoX from Sinorhizobium meliloti in complex with a rapidly hydrolyzing substrate, acetylcholine (Oswald, Smits et al. 2008). In classical vapor diffusion experiments crystals appeared after four weeks, but the data showed only little electron density in the ligand-binding site and turned out to result from bound choline instead of acetylcholine. Hydrolysis was favored due to the relatively long time for crystal growth but also because of the acidic pH in the crystallization set-up. To circumvent these problems, accelerated crystal growth was required. In this case micro-seeding resulted in crystals suitable for data collection in less than 24 hours. In recent years, more effort in nucleation control has yielded novel materials that can be used as nucleants for crystals. These methods use the second route of nucleus formation, as a solid material is introduced into the crystallization solution as a 'universal' nucleant (Chayen, Saridakis et al. 2006). Several substances have been tried with more or less success. Some have been useful for individual proteins, but mostly they were not generally applicable (McPherson and Shlichta 1988; Chayen, Radcliffe et al. 1993; Blow, Chayen et al. 1994). In 2001, Chayen et al.
proposed the idea of using porous silicon whose pore size is comparable to the size of a protein molecule. In theory such pores may confine and concentrate the protein molecules at the surface of the silicon and thereby encourage them to form crystal nuclei (Chayen, Saridakis et al. 2001). These nucleants have become commercially available (www.moleculardimensions.com) and have proven to be suitable for different kinds of proteins; even membrane proteins that could not be crystallized before formed nice crystals in the presence of these nucleants.
Cryoprotection
Exposure of a protein crystal to X-rays at room temperature results in dramatic radiation damage due to radicals formed by the ionizing X-ray photons. To reduce this harmful disintegration of the protein crystal, the crystal is cooled to 100 K with the help of liquid nitrogen (Low, Chen et al. 1966; Hope 1988; Rodgers 1994; Garman 1999). However, it is common for the cooling process to disrupt the crystal order and decrease the diffraction quality. Thus, the crystal must be cooled so fast that the water in the solvent channels is in the vitreous rather than in the crystalline state at the end of this procedure. As for pure water this cooling has to take place very quickly (within about 10^-5 s; Johari, Hallbrucker et al. 1987), some water molecules can be replaced by a cryoprotective solution prior to cooling (Juers and Matthews 2004). This extends the time window to up to 1-2 s (Garman and Owen 2006); however, finding a good 'cryoprotectant' for a particular protein crystal again involves substantial screening. Once flash-frozen in liquid nitrogen, the crystal must be kept below the glass transition temperature of the cryobuffer, at or below 155 K, at all times (Weik, Kryger et al. 2001).
What can you do when all efforts did not result in crystals?
Buffer composition - again!
The choice of the right buffer for crystallization experiments is crucial. As shown above, every protein needs its own buffer composition to feel somewhat happy in this artificial aqueous environment. Especially since high protein concentrations (>10 mg/ml) are required for crystallization, one might have to test several buffer compositions again (see also Figure 1). As a rule of thumb, around 50% of the drops should remain clear immediately after mixing protein and buffer solution. If drastically more precipitation is observed in the drops, one should first consider a lower protein concentration and, secondly, changing the buffer system again.
How to obtain a rigid protein suitable for crystallization?
To overcome the problem of flexibility of some regions in the protein, the addition of ligands is often a very powerful tool to fix the protein in a single conformation that is more favorable for crystallization. A good example of this strategy is the crystallization of so-called substrate binding proteins (for a recent review see Berntsson, Smits et al. 2010). These proteins catch their substrate in the periplasm of bacteria or at the outer membrane of archaea and then deliver it to their cognate transport system located in the membrane. The mechanism of substrate binding is quite well understood. These binding proteins all consist of two domains which rotate towards each other during the binding event. In solution without substrate they are quite flexible, and NMR studies proved an equilibrium between open and closed conformations (Tang, Schwieters et al. 2007). Analysis of all available structures of this class of proteins showed that more than 95% were crystallized with a ligand bound (Berntsson, Smits et al. 2010). Thus, a stabilization of the two domains seems to simplify crystal contact formation dramatically. Although one always wants to obtain a functional conformation of the protein in the crystal structure, it is sometimes helpful to think about how to stop the protein from doing its job. A non-functional protein is in general less flexible and fixed in one conformation. One example of a successful implementation of this strategy is the crystal structure of NhaA from Escherichia coli solved in 2005 (Hunte, Screpanti et al. 2005). Here, Hunte et al. downregulated the protein activity by working at an acidic pH of 4. Although the protein shows almost no activity at this pH, the structure reveals the basis of the mechanism of Na+/H+ exchange, and also its regulation by pH could be understood.
Rational protein design for crystallization: Surface engineering
The first example of rational protein design that yielded a well-diffracting protein crystal was given by Lawson et al. in 1991 (Lawson, Artymiuk et al. 1991). They compared the amino acids involved in crystal contact formation of the rat L ferritin protein (which is highly homologous to human ferritin H, the target protein) with the amino acids present at those positions in human ferritin H. A replacement of Lys86, found in the human sequence, with Glu, which occurs in rat, recreated a Ca2+-binding bridge that mediates crystal contacts in the rat ortholog. As this method was successful for several other proteins (McElroy, Sissom et al. 1992; Braig, Otwinowski et al. 1994; Horwich 2000), a general protocol was required.
ΔG = ΔH - TΔS(protein) - TΔS(solvent)
As the enthalpy values of intermolecular interactions in a crystal lattice are rather small (see above), crystallization is very sensitive to entropy changes of both protein and solvent. The formation of ordered protein aggregates carries a negative entropy term. This can only be overcome by positive entropy from the release of water bound to the protein.
However, large hydrophilic residues (e.g. lysines, arginines, glutamates, glutamines) exposed on the protein surface need to become ordered. Since they are rather flexible, this can cause problems. It can be overcome by mutating large amino acids into smaller ones, for example alanines. Among these large amino acids, lysines and glutamates play a particular role, as they are (with only very few exceptions) always located on the protein surface (Baud and Karlin 1999). Both lysines and glutamates are typically disfavored at interfaces of protein-protein complexes (Lo Conte, Chothia et al. 1999); therefore it is rather straightforward to assume that lysine- and glutamate-to-alanine mutants are good targets for protein crystallization if the wild-type protein hardly forms crystals. However, this also means that one has to go several steps backwards on the road to a protein structure determination (see Figure 1).
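As a rough illustration of how such surface-entropy-reduction candidates might be short-listed, the sketch below scans a sequence for short clusters of lysines, glutamates and glutamines. The window size, residue set and threshold are arbitrary choices for illustration only; they are not the validated protocol of the dedicated prediction servers mentioned above.

```python
# Hypothetical helper: flag short clusters of high-entropy surface residues (K, E, Q)
# as candidates for alanine substitution. Parameters are illustrative assumptions.
HIGH_ENTROPY = set("KEQ")

def ser_candidates(sequence: str, window: int = 3, min_hits: int = 2):
    """Return (position, segment) pairs where >= min_hits high-entropy residues cluster."""
    hits = []
    for i in range(len(sequence) - window + 1):
        segment = sequence[i:i + window]
        if sum(aa in HIGH_ENTROPY for aa in segment) >= min_hits:
            hits.append((i + 1, segment))  # 1-based sequence position
    return hits

# Toy example sequence
for pos, seg in ser_candidates("MKTAYIAKQEKLSEEAGKKE"):
    print(f"position {pos}: {seg} -> consider Ala substitutions")
```

A real design would of course also check that the flagged residues are surface-exposed and not functionally important before mutating them.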
Affinity tag removal: a philosophical question?
Another variable in protein crystallization nowadays is the affinity tag used for purification of the desired protein. The decisions about the position and choice of the affinity tag are mostly made at the beginning of the long way to a crystal structure (see Figure 1). However, they become crucial again at the crystallization step. In general, most people like to remove the tag before crystallization to ensure a physiological conformation. But there are examples where the tag played a pivotal role in crystallization (Smits, Mueller et al. 2008a). The crystal structure of the octopine dehydrogenase from Pecten maximus is shown in Figure 9 (Smits, Mueller et al. 2008b), with the interaction sites/crystal contacts highlighted in green.
In Figure 9A the contacts look quite similar to those presented in Figure 5. However, when having a closer look at the His-tag, one recognizes that it is located in a cavity formed by another monomer of the protein. In that cavity it can form several hydrogen bonds with amino acids of the other monomer, resulting in a very strong interaction which yields good quality crystals.
Crystallization using antibody fragments
A number of ways to stabilize proteins for crystallography have been developed, for example genetic engineering, co-crystallization with natural ligands and reducing the surface entropy (see above). Recently, crystallization mediated by antibody fragments has moved into the focus of crystallographers, especially to obtain crystals of membrane proteins (Ostermeier, Iwata et al. 1995; Hunte and Michel 2002). Membrane protein crystallization is even tougher than that of soluble proteins because of the amphipathic surface of the molecules. As they are located in the lipid bilayer, most of their surface is hydrophobic and must be covered to keep them in solution. This is achieved by detergents. The detergent micelles cover the hydrophobic surface, and therefore this area is no longer available to form crystal contacts. Crystal contacts can only be formed by the polar surfaces of these proteins. As many membrane proteins contain only relatively small hydrophilic domains, a strategy to increase the probability of getting well-ordered crystals is required. Antibody fragments can play this role. They can be designed to bind at specific regions of the protein and then function as an additional polar domain of the membrane protein complex (for examples see Ostermeier, Iwata et al. 1995; Huber, Steiner et al. 2007).
Conclusion
Why do we put so much effort into good quality crystals? Single good quality crystals constitute an essential prerequisite for structural investigations of biological macromolecules using X-ray diffraction. The harder one works on crystal quality, the easier the determination of a reasonable atomic model of the molecule of interest becomes. The vast majority of problems encountered in crystal structure determination can typically be traced back to data-quality issues caused by crystal imperfections. Consequently, although the primary focus of structural biology is on the macromolecule that makes up a crystal, there is also considerable interest in the physical properties, nucleation and growth of the crystals themselves. Statistics of various Structural Genomics Centers prove that protein crystallization, despite all the progress in the technology of crystallization robotics, is still a rather tough field in biological science. Success rates range from 10-30% for small prokaryotic proteins and decrease dramatically to a few percent for human proteins. The struggle of obtaining crystals for protein structure determination is nevertheless justified: after all efforts, looking at electron density and subsequently at the protein structure is still one of the most intriguing as well as auspicious parts of structural biology.
Fig. 2. Elution fractions of an IMAC. The protein was eluted via a linear imidazole gradient from 10 to 500 mM and the absorption at 280 nm was recorded. The elution fractions were collected and photographed. A: IMAC chromatogram of the His-tagged protein. Elution fractions containing the desired protein (indicated by a bar) are collected and shown in B-D. B and C: Elution fractions of the protein in 50 mM Tris-HCl, 150 mM NaCl, pH 8.0 immediately and 10 min after the elution, respectively. D: The elution fractions were immediately mixed in a 1:1 ratio with a buffer that enhances the protein stability (50 mM citrate, 50 mM LiCl, pH 6.00), evaluated during a solubility screen.
Fig. 3. Size exclusion chromatograms (UV 280 nm) of proteins in different buffers. A: The homogeneity of a protein was analyzed in two different buffers; continuous line: 50 mM Tris-HCl, 150 mM NaCl, pH 8.00; dotted line: 20 mM Hepes, 150 mM NaCl, pH 7.00. B: The protein was eluted from the IMAC column either with imidazole (continuous line) or with histidine (dotted line), concentrated and applied to the SEC.
Fig. 4. Time-dependent stability optimization screen using Sypro Orange as reporter. The protein is diluted 1:50 into each test buffer containing Sypro Orange, excited at 490 nm, and the fluorescence at 570 nm is measured for 60 minutes automatically with a plate reader (Fluorostar, BMG Labtech). Normalized fluorescence is plotted against time.
Fig. 5. Example of the packing within a crystal. A: The unit cell is shown in black, crystal contacts are highlighted with purple circles and lines. B: Three-dimensional crystal packing of a different protein. The unit cell as well as one protein monomer are depicted in green.
Fig. 8. Protein crystallization techniques. Schematic representation of a) vapor diffusion, b) micro batch and c) micro dialysis crystallization techniques widely used for crystal growth (adapted from Drenth 2006).
Fig. 9. Crystal contacts in the OcDH protein. A: Overall view of two monomers. Crystal contacts on the surface are highlighted by green circles. B: Zoom in on the His-tag of one monomer. The His-tag of one monomer in the crystal structure is located near the binding site, in a deep cavity formed by the other monomer. Therefore it is able to form several hydrogen bonds (highlighted in green) with side and main chains of the other protein, but also with the ligand bound in this binding site (orange).
Table 1. Overview of expression systems. Summarized are the advantages and disadvantages.
The concept Derewenda et al. proposed in 2004 is based on the general equation for the free energy that drives protein crystallization: | 2018-02-23T01:36:46.398Z | 2012-01-13T00:00:00.000 | {
"year": 2012,
"sha1": "6ba3786c80c6d907be28b8698c744e0db692dc35",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5772/28014",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "8020641b7e49a80f6b348acdb463f53f24304023",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
244508849 | pes2o/s2orc | v3-fos-license |
Immunohistochemical features of collagen formation in the uterus of fetuses with a gestational term of 21-28 weeks from mothers whose pregnancy was complicated by preeclampsia of different stages of severity
In the following article we provide the results of a study of the uterus structure in fetuses with a gestational term of 21-28 weeks from mothers whose pregnancy was complicated by PE of different stages of severity (15 cases), compared with that of fetuses from mothers with a physiological pregnancy (15 cases). All fetuses died intranatally as a result of an acute disorder of utero-placental and utero-fetal circulation. The research methods were macroscopic, organometric, histological, immunohistochemical, morphometric and statistical. By applying the organometric method we revealed a probable decrease of the indexes of weight, length and thickness of the uterine wall in fetuses from mothers with PE compared with those in fetuses from healthy mothers. The observational histological examination did not reveal any significant changes in the structure of the organ's wall in fetuses from the study groups. Thus, all organs were represented by mucous, muscular and serous membranes with a clear boundary between them. The comparative morphometric study revealed the following features of the uterus wall structure in fetuses from mothers with PE. The indexes of thickness of all structural
components of the organ's wall were clearly decreased compared with those in fetuses from mothers with a physiological pregnancy. In the uterine endometrium of fetuses from mothers with PE of moderate severity, features of proliferative (hormonal) activity are present; however, in the organs of fetuses from mothers with severe PE we noticed a probable decrease in the number of glands as well as a lack of proliferative activity in them (hypoplastic changes). By applying the immunohistochemical method using MCAT to CD95 we disclosed a probable increase of the apoptotic index in the organs of fetuses from mothers with a complicated pregnancy compared with that in fetuses from healthy mothers. Among the specific features of the uterine myometrium structure in fetuses from mothers with PE are the following: a decrease in the percentage of the vascular component as well as increased growth of the connective tissue. Moreover, the connective tissue is represented mostly by collagen of type III, while in the organs of fetuses from healthy mothers collagen of type I prevails. In the walls of vessels of the arterial type in the uterus of fetuses from mothers with PE we noticed an increased glow of collagen of type III as well as a probable decrease of the glow of collagen of type IV. In the organs' vessels of fetuses from healthy mothers, in contrast, we noticed an increased glow of collagen of type IV. One fact attracts attention: all the aforementioned changes in the uterine wall of fetuses from mothers with PE were minimally manifested when the course of the disease was mild and maximally manifested when it was severe. All changes in the uterus in case of fetuses from mothers the different stages, reaches up to 80-85% [4,5]. All vascular and endocrine changes that have been described in the mother-placenta-fetus system in this pathology lead to gross violations of the implementation and formation of the fetal internal organs [6,7]. First of all, this applies to the female genitals of fetuses [7,8]. It is commonly known that PE can manifest in girls who were born to mothers who had this disease during pregnancy [9,10]. The modern literature also reports cases of development of germinal function disorders and even primary infertility in the offspring of such mothers [10,11]. Girls who were born to women with PE are classified as a risk group for the development of complications of pregnancy and future childbirth [9,11]. However, despite the urgency of the problem, the immunohistochemical features of uterus development in fetuses from mothers with PE have still not been studied.
The aim of the research is to disclose the main features of collagen formation in the uterus of fetuses with a gestational term of 21-28 weeks born to pregnant women with preeclampsia of different stages of severity.
As the research material we chose 15 organs of fetuses from mothers whose pregnancy was complicated by PE of different stages of severity (group of comparison) versus 15 uteri of fetuses from mothers with a physiological pregnancy (main group). In the group of comparison the fetuses were distributed according to the stage of the mother's PE severity as follows: 5 fetuses from mothers with a mild course of the disease, 5 fetuses from mothers with PE of moderate severity and 5 fetuses from mothers with severe PE. All fetuses had died intranatally as a result of an acute disorder of utero-placental and utero-fetal circulation at the gestational term of 21-28 weeks. The stage of PE severity was ascertained according to the medical documentation data; namely, we evaluated the arterial pressure level, the amount of protein in the urine as well as the presence of edema. The mothers of fetuses from the main group were healthy according to the medical cards of pregnant women.
After removal, the organs were examined and the main sizes of the fetal uterus were measured. From every organ 2-3 pieces were cut so that all layers of the organ were present in a section. The material was fixed in a neutral buffered formalin solution in order to reduce the effect on the tissues. Afterwards it was dehydrated in alcohols of increasing concentration. In 24-48 hours the material was embedded in paraffin [12]. From the manufactured blocks, 2-3 sections with a thickness of 3-5 μm were made and stained by histological (hematoxylin and eosin) as well as histochemical methods (the Brachet method, the Feulgen-Rossenbeck method, the Schiff reaction).
For staining by histochemical methods, pieces of tissue were fixed in Carnoy's fluid (6 parts of absolute ethanol, 3 parts of chloroform, 1 part of glacial acetic acid), which was prepared directly before fixation. Finally, the specimens were transferred to absolute alcohol and embedded in paraffin.
In order to determine the relative volumes of the main structural components of the fetal uterus, the sections were studied by morphometric methods.
The data were processed statistically on a personal computer using the statistical packages "Excel for Windows", "Statistica 7.0 for Windows" and "SigmaStat 3.1 for Windows" [13].
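The group comparisons reported below (means with standard deviations for the study groups) can be illustrated with a short sketch. The authors worked in Excel/Statistica/SigmaStat; the SciPy call and the numbers below are only placeholders for the same kind of computation, not the study's actual data.

```python
# Illustrative sketch: mean +/- SD per group and a two-sample t-test, with placeholder values.
import numpy as np
from scipy import stats

uterus_weight_control = np.array([1.9, 2.1, 2.0, 2.2, 1.8])  # placeholder values
uterus_weight_pe      = np.array([1.4, 1.5, 1.6, 1.3, 1.5])  # placeholder values

for name, group in [("control", uterus_weight_control), ("PE", uterus_weight_pe)]:
    print(f"{name}: {group.mean():.2f} +/- {group.std(ddof=1):.2f}")

t_stat, p_value = stats.ttest_ind(uterus_weight_control, uterus_weight_pe)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```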
Results and discussion. In all cases the location of the organ was typical: the body and the fundus of the uterus were situated in the greater pelvis, while the cervix was situated in the lesser pelvis. The fallopian tubes extended from the lateral ends, while the ovarian ligament was attached to the posterior surface.
On macroscopic examination we revealed the following: the organs are pear-shaped, the surface is smooth and grayish-bluish. On section the organ's tissue is red with a moderate blood supply. At this stage of gestation the uterus is represented by the body and the cervix. We have to note that the cervix occupies up to 3/4 of the overall length of the organ.
The ratio of the length of the organ's body to the length of the cervix in the uteri of fetuses from healthy mothers was 1:2.7, while in fetuses from mothers with a complicated pregnancy this ratio was 1:2.8.
The average organometric indexes of the organs of fetuses from the study groups are presented in Table 1.
Analyzing the data from Table 1, we can conclude that the average indexes of weight, length and thickness of the organs of fetuses from mothers with PE are clearly decreased relative to those of fetuses from healthy mothers. Moreover, one fact attracts attention: the indexes were minimally decreased in fetuses from mothers with a mild stage of PE severity and maximally decreased in fetuses from mothers with a severe stage of PE.
The observational microscopic study of specimens stained by histological methods revealed that the uterus wall in all cases was represented by endometrium, myometrium and perimetrium. Moreover, the boundary between the layers is clearly distinguishable in fetuses from the main group as well as in the organs of fetuses from mothers with PE of mild and moderate severity. In the uteri of fetuses from mothers with severe PE the boundary between the layers was indistinct, and in a couple of cases it was impossible to distinguish at all. The indexes of thickness of the main structural components of the uterus wall of fetuses from the study groups are presented in Table 2.
The data from Table 2 reveal that the muscular layer prevails in all observations; that is, the thickness of the muscular component reaches higher values than those of the mucous and serous membranes. One fact attracts attention: there is a clear decrease of the thickness indexes of the main structural elements of the uterus wall in fetuses from mothers with PE relative to those of fetuses from mothers whose course of pregnancy was physiological.
Moreover, the maximal decrease of the indexes is noticed in fetuses from mothers with severe PE, while the minimal one is noticed in case of maternal PE of mild severity. The structure of the mucous membrane of the organs was represented by superficial and deep layers, which correspond to the basal and functional layers of an adult woman. By applying immunohistochemical methods using MCAT to CD95 we revealed a number of apoptotically altered cells in the endometrium of the uteri of fetuses from the study groups. The average data on the apoptotic index are presented in Table 3. Analyzing the data from Table 3, we can come to the following conclusion.
The data from Table 4 disclose the fact that collagen of type III prevails in the structure of the uterine myometrium of fetuses from the study groups.
In the organs of fetuses from mothers with a complicated pregnancy we noticed a clearly increased glow of collagen of type I as well as of type III. Moreover, the intensity of the glow varies according to the stage of severity of the mother's PE: the maximal indexes are reached in the uteri of fetuses from mothers with severe PE, while the minimal ones are reached in case of mothers with mild PE. Table 4 The conv.un.opt.dens., in case of fetuses from mothers with PE of severe stage of severity - 0.083±0.003 conv.un.opt.dens. Analyzing these data, we can come to the following conclusion: the intensity of the glow of collagen of type IV in the organs of fetuses from mothers with a complicated pregnancy is clearly decreased relative to that of fetuses from healthy mothers.
The perimetrium in all cases is represented by loose fibrous connective tissue, which is sometimes fused with the mesothelium.
Thus, in this article we have presented the histochemical features of collagen formation in the uterus wall of fetuses with a gestational term of 21-28 weeks. We compared the uterus wall structure of fetuses from mothers with a physiological pregnancy on the one hand and of fetuses from mothers whose pregnancy was complicated by PE of different stages of severity on the other. It was shown that in the organ wall of fetuses from mothers with PE, in contrast to the organs of fetuses from healthy mothers, the organometric indexes as well as the thickness indexes of the main structural components were clearly decreased. In the endometrium of the uteri of fetuses from mothers with a complicated pregnancy we noticed an increased apoptotic index. All the changes mentioned above are determined mainly by vascular disorders in the mother-placenta-fetus system that take place in this pathology [14,15]. In the endometrium of fetuses from mothers with a complicated pregnancy we also noticed changes in the amount and activity of the glandular component, which can be explained by endocrine disorders both in the placental tract and in the organism of the pregnant woman [16,17]. All the changes postulated above could consequently lead to the formation of precancerous pathology and even endometrial cancer [18,19,20].
In the myometrium of fetuses from mothers with a complicated pregnancy we could
1.
The indexes of weight, length and thickness of the uterus wall of fetuses from mothers with PE are clearly decreased relative to those of fetuses from healthy mothers.
2.
Structurally, the uterus wall is formed correctly in all cases; mucous, muscular and serous membranes can be identified in it. However, the average thickness indexes of the main structural components of the organ wall in the uteri of fetuses from mothers with PE are clearly decreased relative to those of fetuses from mothers with a physiological pregnancy.
3.
The endometrium of the uteri of fetuses from the study groups has a typical
6. In the walls of vessels of the organs of fetuses from mothers with PE we revealed a decreased intensity of glow of collagen of type IV relative to the organs of fetuses from mothers with a physiological pregnancy.
7. The maximal decrease of the indexes evaluated in this article takes place in the organs of fetuses from mothers with severe PE, while the minimal one takes place in case of a mild course of the disease.
8. All the revealed changes of the structural components of the uterus wall of fetuses from mothers with PE of different stages of severity could consequently lead to the formation of glandular hyperplasia, endometrial polyps, precancerous diseases as well as endometrial cancer. Moreover, they could also lead to a disorder of the germinal function in the female organism.
9. The structural changes in the uteri of fetuses from mothers with PE are determined, first of all, by changes in the vascular bed of the feto-placental complex as well as by endocrine disorders in the mother's organism that take place in this pathology.
The perspectives of future research: to disclose the immunohistochemical features of collagen formation in the uteri of fetuses from mothers with PE of different stages of severity at gestational terms of 29-36 weeks as well as 37-38 weeks. | 2021-11-24T17:15:52.701Z | 2021-09-30T00:00:00.000 | {
"year": 2021,
"sha1": "c5482e82ff519bc65e2656dfb2a0cb3b46ffb6a3",
"oa_license": "CCBYNCSA",
"oa_url": "https://apcz.umk.pl/JEHS/article/download/35819/30112",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "854daa1e91147562c3144446bfd5d3854d6b4bcf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
252070579 | pes2o/s2orc | v3-fos-license | A lightweight hybrid CNN-LSTM model for ECG-based arrhythmia detection
Electrocardiogram (ECG) is the most frequent and routine diagnostic tool used for monitoring heart electrical signals and evaluating their functionality. The human heart can suffer from a variety of diseases, including cardiac arrhythmias. Arrhythmia is an irregular heart rhythm that in severe cases can lead to stroke and can be diagnosed via ECG recordings. Since early detection of cardiac arrhythmias is of great importance, computerized and automated classification and identification of these abnormal heart signals have received much attention over the past decades. Methods: This paper introduces a light deep learning approach for high-accuracy detection of 8 different cardiac arrhythmias and normal rhythm. To leverage the deep learning method, resampling and baseline wander removal techniques are applied to the ECG signals. In this study, 500-sample ECG segments were used as model inputs. The rhythm classification was done by an 11-layer network in an end-to-end manner without the need for hand-crafted manual feature extraction. Results: In order to evaluate the proposed technique, ECG signals were chosen from two PhysioNet databases, the MIT-BIH arrhythmia database and the long-term AF database. The proposed deep learning framework, based on the combination of a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), showed more promising results than most of the state-of-the-art methods. The proposed method reaches a mean diagnostic accuracy of 98.24%. Conclusion: A trained model for arrhythmia classification using diverse ECG signals was successfully developed and tested. Significance: Since the present work uses a light classification technique with high diagnostic accuracy compared to other notable methods, it could successfully be implemented in Holter monitor devices for arrhythmia detection.
I. INTRODUCTION
Cardiovascular disease is amongst the three main global causes of death. As reported by the World Health Organization, nearly 18 million people die each year from heart disease [1]. An electrocardiogram (ECG) is a method of measuring and recording the electrical activity of the heart. The ECG signal is recorded non-invasively by placing multiple electrodes on particular points of the body. A normal heart rhythm consists of three main parts: P waves, T waves and the QRS complex. The occurrence of any anomaly in the rhythm and/or heart rate indicates the existence of a disorder and is called arrhythmia [2]. Heart arrhythmia is one of the chronic diseases that many people suffer from worldwide. In some cases, arrhythmias lead to stroke or sudden cardiac death and therefore endanger human life. An early and accurate diagnosis of life-threatening arrhythmias can be effective in saving lives [3]. People who suffer from arrhythmias might have symptoms such as inadequate blood pumping, shortness of breath, fatigue, chest pain, and unconsciousness. It is therefore important to use a Holter device for remote long-term heart monitoring in people with heart failure risk factors such as a history in immediate family members or high blood pressure. Furthermore, for the analysis of long-term ECG recordings, manual examination is exhausting and time-consuming, especially in cases in which real-time diagnosis matters. Therefore, in recent years the use of automated computer-aided methods for diagnosis has increased significantly, and although several methods have been proposed, this matter still attracts the attention of scientists [4]. Recently, deep learning algorithms such as convolutional neural networks (CNN), as automated computer-aided methods, have been frequently used for different biomedical tasks and have yielded encouraging results [5]. The ability of deep learning networks to recognize patterns and learn important features from raw data makes them suitable for classifying ECG signals [6]. A quick search in the literature shows diverse methods proposed for arrhythmia classification. Most of the methods are based on heavy deep learning models that use a high number of layers to detect arrhythmias. The high number of layers slows down the training phase and later the evaluation phase, sacrificing the real-time goal. Therefore, there is a need to develop a light deep learning model that classifies ECG arrhythmias with high accuracy.
The purpose of this study is to develop an accurate, light automatic diagnostic system that assists cardiologists by providing an intelligent deep learning method that is time-saving and cost-efficient and reduces the number of misdiagnosed arrhythmias. The proposed model deploys a CNN and a recurrent neural network (RNN), namely long short-term memory (LSTM), for the detection of various cardiac arrhythmias. The ECG recordings used in this study were obtained from the MIT-BIH Arrhythmia Database and the Long-Term AF Database, which are available from PhysioNet. As a summary, the main contributions and novelty of this paper are listed as follows: 1) LSTM along with CNN architectures has been deployed, because the prediction depends on the whole input sequence. 2) We focused on creating a lightweight model to automatically detect and classify 8 different arrhythmias as well as normal sinus rhythm. The arrhythmias discussed in this article are: atrial fibrillation, atrial flutter, ventricular bigeminy, paced rhythm, Wolff-Parkinson-White syndrome (WPW), supraventricular tachyarrhythmia, ventricular trigeminy and ventricular tachycardia. The rest of the paper is organized as follows: related works are presented in Section II. The proposed system architecture is illustrated in Section III. Experimental results are illustrated and then compared with the results of other works in the literature in Section IV. Finally, the conclusions section summarizes the current state of the suggested technique as well as the advantages and disadvantages of the work.
II. LITERATURE REVIEW
Several researchers have explored different approaches throughout the years for the classification and interpretation of various cardiac arrhythmias [7]-[10]. Machine learning techniques as automated classification methods have boosted the classification accuracy in recent years. The study in [11] used ArrhyNet to detect and classify ECG signals and, to overcome the issue of an imbalanced dataset, utilized the Synthetic Minority Oversampling Technique (SMOTE). An accuracy of 92.73% was achieved with their classification model. Amongst the numerous applied methods, deep learning techniques based on CNN models have been widely implemented due to their promising performance. One way of applying CNN models to ECG signals is by virtue of transfer learning. The authors in [12] proposed an approach for the accurate classification of rare arrhythmia types, represented as 2-D images, using a transfer learning method: a pre-trained CNN model called VGG16 for extracting features and a v-SVM classifier for classification. The accuracy of their model for normal and arrhythmia signals is 87% and 93%, respectively, resulting in an average accuracy of 90.42%. Isin et al. also used a transfer learning approach with a pre-trained CNN model called AlexNet as a feature extractor. Simple back-propagation was then applied in order to classify the signals into three different cardiac rhythms (normal, right bundle branch block, paced). This transferred deep learning approach performed efficiently, reaching an accuracy of 92% on the test dataset [13]. Mustaqeem et al. [14] proposed a method for differentiating between normal and diseased ECG signals obtained from the UCI database. The proposed approach for the selection of the most significant features of the ECG signals was a wrapper method built on a random forest algorithm. Afterwards, cardiac arrhythmias were classified by means of three different SVM-based techniques, including one-against-all (OAA), one-against-one (OAO), and error-correcting codes (ECC). After evaluating the mentioned techniques, the OAO method proved to be best suited for classifying the ECG records into 16 categories, achieving an accuracy of 92.07% when using a 90:10 data split ratio. Hannun et al. [15] developed a 34-layer end-to-end deep neural network for categorizing 12 rhythm classes. The DNN model was trained on a novel, vast single-lead ECG database and resulted in an encouraging AUC of 0.97. The authors in [16] executed a segmentation operation on the ECGs in the MIT-BIH database, breaking the large records down into segments containing 200 samples. Subsequently, decision tree, random forest and logistic regression multi-class classifiers were used in order to classify 3 cardiac rhythms. The average rhythm detection time is about 1 second, allowing online and real-time performance. Eventually, random forest showed the most accurate and robust performance, achieving 88.7% accuracy and 92.5% precision. The authors in [17] used the KecNet model, which contains a CNN structure with a modified convolutional layer and a symbolic parameter extraction architecture in the feature extraction part of the model, to classify arrhythmias; they achieved an accuracy of 99.31%. The authors in [18] augmented the original dataset with a set of time-series transformations of the original signals and used a 1-D CNN model to classify arrhythmias using the original and transformed database. They managed to attain an accuracy of 99% without overfitting the model.
III. ARRHYTHMIA CLASSIFICATION METHODOLOGY
In the arrhythmia detection and classification method proposed in this study, pre-processing is first performed to prepare the data to be fed into the deep neural network, and then the training and testing procedure using the deep neural network algorithm is carried out on the data. The pre-processing stage consists of the following steps: noise elimination and data resampling. The research was carried out in the Python programming language in JupyterLab from the Anaconda distribution of Python 3.8.11, on a system with an Intel Core i7 7th Gen processor and an NVIDIA GeForce GTX 1070 Ti, using the TensorFlow and TensorFlow-GPU 2.3.0 packages.
A. ECG Dataset
In this study, the MIT-BIH arrhythmia database and the long-term atrial fibrillation (LTAF) database were used for the training and testing of the proposed arrhythmia classification framework [19]. The MIT-BIH arrhythmia database consists of 48 half-hour ECG recordings obtained from 47 subjects, studied by the Beth Israel Hospital Arrhythmia Laboratory. Unlike conventional ECG recordings that use 12 leads, this database contains heart signals recorded from two leads (usually the MLII and V1 leads) with a 360 Hz sampling frequency [20]. The LTAF database includes 24-hour ECG signals obtained from 84 subjects with paroxysmal atrial fibrillation. The ECG signals were recorded synchronously from two leads with a 128 Hz sampling frequency [21]. These ECG recordings were annotated with details such as rhythm type, beat type, peak locations and the onset and offset of each waveform by two or more professional cardiologists. These annotations were first extracted from the signals and then used in the training and testing process. Each subject's ECG recordings may contain different types of arrhythmias, so, using the rhythm type annotations, all arrhythmias were excerpted from all subjects.
B. Preprocessing 1) Noise filtering: ECG signals are usually corrupted by different types of low- or high-frequency noise, such as baseline wander (BW), power line interference, electromyography (EMG) noise and electrode motion artifact noise. Different types of filters can be applied to remove these noises. Respiration and subject movement are the main causes of BW, which is a low-frequency artefact in the ECG recordings of a subject. In this research, following previous works, a median filter with 200 ms and 600 ms widths is used [2]. The median filter is a non-linear digital filtering technique used for noise removal in images and signals whilst preserving the useful details of the signal or image. Each record was then normalized to an amplitude range of [−1, +1].
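A common implementation of this two-stage median-filter baseline removal, followed by amplitude normalization to [-1, +1], is sketched below. The exact kernel handling (odd lengths derived from the 200 ms / 600 ms widths) and the min-max scaling are assumptions, since the paper does not spell them out.

```python
# Sketch: baseline wander removal with cascaded median filters and [-1, +1] normalization.
import numpy as np
from scipy.signal import medfilt

def remove_baseline(ecg: np.ndarray, fs: int = 128) -> np.ndarray:
    k1 = int(0.2 * fs) | 1   # ~200 ms window, forced to an odd length
    k2 = int(0.6 * fs) | 1   # ~600 ms window, forced to an odd length
    baseline = medfilt(medfilt(ecg, k1), k2)
    return ecg - baseline

def normalize(ecg: np.ndarray) -> np.ndarray:
    lo, hi = ecg.min(), ecg.max()
    return 2 * (ecg - lo) / (hi - lo) - 1 if hi > lo else np.zeros_like(ecg)

# usage: clean = normalize(remove_baseline(raw_record, fs=128))
```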
2) Resampling: The MIT-BIH ECG recordings were digitized at 360 samples per second and the ECG recordings in the LTAF database were digitized at 128 samples per second. Therefore, in order to make use of both databases, a resampling technique is used to downsample the signals in the MIT-BIH dataset. After the resampling process, the sampling frequency of all records is 128 Hz.
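One way to perform this downsampling is a polyphase resampler, as sketched below; the paper does not state which resampling routine was actually used, so this is only an illustration.

```python
# Sketch: downsample 360 Hz MIT-BIH records to the common 128 Hz rate.
from scipy.signal import resample_poly

def to_128hz(signal, fs_original: int):
    if fs_original == 128:
        return signal
    # 360 Hz -> 128 Hz corresponds to the rational factor 128/360 (= 16/45)
    return resample_poly(signal, up=128, down=fs_original)

# usage: ecg_128 = to_128hz(mitbih_record, fs_original=360)
```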
C. Segmentation
Segmentation of the ECG recordings is a step toward homogenizing the length of the data that is going to be fed to the model. With a sampling rate of 128 Hz and an average cardiac cycle of 0.8 seconds, segments of 500 samples (3.9 seconds) seem appropriate, since most arrhythmias appear within this length. Segments were extracted in an overlapping manner: the segmenting window slides through the records and produces the sections. After this step, all ECG segments from both databases are combined together. As shown in Table I, the numbers of segments belonging to the normal and atrial fibrillation classes were excessively high. Therefore, in order to eliminate the side effects of this imbalance, evaluation metrics during both the training and testing stages were weighted by the inverse of the size of each class.
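The overlapping windowing and the inverse-size class weighting can be expressed compactly as below. The step size (degree of overlap) is an assumption, since the paper does not specify it.

```python
# Sketch: overlapping 500-sample segmentation and inverse-frequency class weights.
import numpy as np

def segment(signal: np.ndarray, length: int = 500, step: int = 250) -> np.ndarray:
    return np.array([signal[i:i + length]
                     for i in range(0, len(signal) - length + 1, step)])

def inverse_class_weights(labels: np.ndarray) -> dict:
    classes, counts = np.unique(labels, return_counts=True)
    weights = counts.sum() / (len(classes) * counts)   # smaller classes get larger weights
    return dict(zip(classes.tolist(), weights.tolist()))

# usage with Keras: model.fit(X, y, class_weight=inverse_class_weights(y_int))
```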
D. Proposed model architecture
This paper aims to introduce a high-accuracy deep learning technique, based on the combination of CNN and LSTM architectures, to diagnose different types of cardiac arrhythmias in an end-to-end manner from raw ECG signals. The proposed model consists of 11 layers, which are trained using 85% of the data and evaluated using the other 15% of the segmented data. The architecture of the proposed model, as illustrated in Figure 1, is as follows: three tandem convolutional blocks, each containing a 1-dimensional convolutional layer for extracting high-level features and a Rectified Linear Unit (ReLU) activation function for achieving non-linear capabilities. The output of the ReLU layers is then given to a max-pooling layer for reducing the feature dimensions and selecting the most significant features. The output of the last convolutional block is then fed to an LSTM layer. The LSTM consists of memory blocks and has recursive feedback connections that can handle long-term dependencies and the exploding gradient problem. In order to prevent the proposed deep neural network model from overfitting, dropout layers are used; by randomly dropping out a fraction of the nodes, this layer prevents the overfitting problem. Fully connected layers usually form the last few layers of the network configuration. In this model, several dense layers were used for changing the vectors' dimensions and converting the few-dimensional features to linear vectors. As the last layer, the multi-class activation function softmax was used for converting the output vector of classes to probabilities for each class. Since the target variable is categorical, a one-hot encoding process was used during the multi-class classification procedure. Table II gives a detailed description of the CNN model joined by an LSTM layer implemented for this study.
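A minimal Keras sketch of the described topology is given below. Filter counts, kernel sizes, dropout rate and dense-layer widths are placeholders, since Table II with the exact values is not reproduced in this text.

```python
# Sketch of the 1D-CNN + LSTM topology described above (hyper-parameters are placeholders).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_length: int = 500, n_classes: int = 9) -> tf.keras.Model:
    inp = layers.Input(shape=(input_length, 1))
    x = inp
    for filters in (32, 64, 128):                      # three convolutional blocks
        x = layers.Conv1D(filters, kernel_size=5, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(pool_size=2)(x)        # downsample / keep salient features
    x = layers.LSTM(64)(x)                             # recurrent layer over the feature maps
    x = layers.Dropout(0.3)(x)                         # regularization against overfitting
    x = layers.Dense(64, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out)

model = build_model()
```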
IV. RESULTS AND DISCUSSION
The purpose of this research is to classify 8 arrhythmias and normal rhythm. After data preparation, the dataset was divided into training and testing sets; because larger training sets generally improve performance on unseen data, 85% of the prepared data was randomly selected for training the proposed model and the remaining 15% was used for testing and validation. The model was trained with categorical cross-entropy as the loss function, and the Adam optimizer was chosen because it converges quickly, is straightforward to implement, is computationally efficient and performs well with little tuning of its default hyper-parameters. Weights and biases were updated iteratively through back-propagation of the loss until the desired optimized values were obtained.
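Training as described (85/15 split, one-hot labels, categorical cross-entropy, Adam, inverse-frequency class weights) might look like the sketch below; the batch size and epoch count are assumptions not given in the text:

```python
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical

def train(model, X, y, n_classes=9, class_weight=None):
    """85/15 split, one-hot labels, Adam + categorical cross-entropy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.15, random_state=0, stratify=y)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X_tr[..., None], to_categorical(y_tr, n_classes),
              validation_data=(X_te[..., None], to_categorical(y_te, n_classes)),
              epochs=30, batch_size=128,          # assumed values
              class_weight=class_weight)          # dict {class_index: weight}
    return X_te, y_te
```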
A. Performance evaluation metrics
There are several metrics for assessing the performance of a multi-class classification model; three are used in this study: accuracy (Acc), sensitivity (Se) and specificity (Sp). Accuracy indicates the overall fraction of correctly classified segments, Acc = (TP + TN) / (TP + TN + FP + FN). Sensitivity evaluates the model's ability to identify the true positives of each class, Se = TP / (TP + FN), and specificity evaluates its ability to identify the true negatives of each class, Sp = TN / (TN + FP). Here, a true positive (TP) is a segment whose true label is positive and whose class is correctly predicted as positive, whereas a false positive (FP) is a segment whose true label is negative but whose class is incorrectly predicted as positive. A true negative (TN) is a segment whose true label is negative and whose class is correctly predicted as negative, while a false negative (FN) is a segment whose true label is positive but whose class is incorrectly predicted as negative. Figure 2 shows the confusion matrix used to visualize the performance of the classification algorithm; the numbers on its main diagonal are the true positives.
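Per-class Acc, Se and Sp can be derived directly from the multi-class confusion matrix, for example as follows:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_metrics(y_true, y_pred, n_classes):
    """Compute Acc, Se and Sp for each class from the confusion matrix."""
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    total = cm.sum()
    metrics = {}
    for c in range(n_classes):
        tp = cm[c, c]
        fn = cm[c, :].sum() - tp
        fp = cm[:, c].sum() - tp
        tn = total - tp - fn - fp
        metrics[c] = {
            "Acc": (tp + tn) / total,
            "Se":  tp / (tp + fn) if (tp + fn) else 0.0,
            "Sp":  tn / (tn + fp) if (tn + fp) else 0.0,
        }
    return metrics
```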
B. Experimental results and discussion
Sensitivity and specificity, which are independent of the number of segments in each class, were used to evaluate the proposed technique. As shown in Figure 3, the ability of the model to correctly identify other rhythms when a particular rhythm is considered (specificity) is above 90% for all 8 arrhythmia classes. Inspection of the confusion matrix in Figure 2 suggests that the poorer identification of some arrhythmias is understandable: in many cases the lack of context, the limited signal duration or the availability of only a single lead limited what could be inferred from the data, making it challenging to determine with certainty whether the annotating cardiologists and/or the algorithm were correct.
Fig. 3. Evaluation results of the proposed technique
The average classification accuracy achieved by the proposed model was 98.24%. The novel 1D-CNN+LSTM model implemented in this study exhibited high accuracy for the classification of different arrhythmia types. Its implementation is straightforward and has lower computational complexity than most state-of-the-art approaches, such as SVM classifier-based strategies, the random forest algorithm [14] or ensemble classifiers deployed alongside SVM-based methods. The proposed model also uses a relatively low number of layers compared with the models in [11], [15], [17], [18]. Most previous studies used only one ECG database [3], [16], whereas here a combination of two databases with different sampling frequencies was used for training and testing. Furthermore, only a limited number of arrhythmia disorders were classified in most earlier studies, such as [13], [16], whereas 9 different cardiac rhythm types were distinguished by this system. A performance comparison with other recent studies on the same problem is given in Table III. The ROC curve is an evaluation metric that gives a graphical illustration of a classifier's diagnostic capability; it is generated by plotting the true positive rate (TPR), also known as sensitivity, against the false positive rate (FPR), also known as (1 − specificity), at different threshold settings. The area under the curve (AUC) measures how well the classifier discriminates between classes: the closer the AUC is to 1, the better the performance of the model. As shown in Figure 4, the model reached nearly perfect AUC in distinguishing between the arrhythmia classes. The high computational complexity of CNNs presents a critical challenge to their broader adoption in real-time and power-efficient scenarios. To show that our model is lightweight enough to be used in Holter monitor devices, we report the model size and the inference time needed to classify a single rhythm. The inference time of the proposed model on a Raspberry Pi is only 5.127 ms, meaning that it takes 5.127 ms to classify one rhythm, and the size of the model loaded on the Raspberry Pi (a processor class used in Holter monitors) is 0.16 MB. This implies that our model is competitive for Raspberry Pi-based Holter monitor devices.
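One-vs-rest AUC per class and single-segment inference latency (as reported above for the Raspberry Pi) can be measured along these lines; the timing code is a generic benchmark, not the authors' procedure:

```python
import time
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

def auc_per_class(y_true, y_score, n_classes):
    """One-vs-rest AUC for each class from predicted class probabilities."""
    y_bin = label_binarize(y_true, classes=list(range(n_classes)))
    return {c: roc_auc_score(y_bin[:, c], y_score[:, c]) for c in range(n_classes)}

def single_segment_latency(model, segment, n_runs=100):
    """Average wall-clock time to classify one ECG segment."""
    x = segment.reshape(1, -1, 1)
    model.predict(x, verbose=0)                  # warm-up call
    start = time.perf_counter()
    for _ in range(n_runs):
        model.predict(x, verbose=0)
    return (time.perf_counter() - start) / n_runs
```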
V. CONCLUSION AND FUTURE WORK
Correct detection of cardiac arrhythmias is crucial for the early treatment of patients, and computer-aided diagnosis can play an important role. In this paper, experiments were conducted on ECG recordings from two databases, MIT-BIH and Long-Term AF. The proposed CNN+LSTM model classifies 8 different types of arrhythmia as well as normal ECG signals, extracting discriminant features of the heart signal through the CNN layers and temporal features through the LSTM layer. The experiments reached an average testing accuracy of 98.24%, with an inference time of 5.127 ms for a single unseen rhythm. In addition to the shape of the heartbeat, other properties of the signal, such as RR intervals and QRS durations, could be used during training to achieve better performance. In this model, convolutional neural networks were followed by an LSTM layer to improve arrhythmia detection accuracy; however, the classification accuracy for some arrhythmias still needs improvement, and deploying rule-based algorithms alongside this model could lead to better performance. Most previous research has been conducted on only one database (mostly MIT-BIH), whereas this research combined two databases; nevertheless, combining more databases is required for more reliable results. The ECG signals used for training and testing were obtained from two leads (usually MLII and V1), while in clinical applications 12-lead ECGs are the standard, and an ideal model should be able to classify arrhythmias from standard ECG signals. Although the model classified 8 types of cardiac arrhythmia as well as normal sinus rhythm with high accuracy, which is more than some related works have been capable of, various important arrhythmias and heart disorders remain undetected, and the implemented model should be improved to distinguish between more of them.
"year": 2022,
"sha1": "4f45dab6e3bb5f982ad7b6f36fa36f45e2533ef8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4f45dab6e3bb5f982ad7b6f36fa36f45e2533ef8",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
Spatial integration of transcription and splicing in a dedicated compartment sustains monogenic antigen expression in African trypanosomes
Highly selective gene expression is a key requirement for antigenic variation in several pathogens, allowing evasion of host immune responses and maintenance of persistent infections 1 . African trypanosomes-parasites that cause lethal diseases in humans and livestock-employ an antigenic variation mechanism that involves monogenic antigen expression from a pool of >2,600 antigen-coding genes 2 . In other eukaryotes, the expression of individual genes can be enhanced by mechanisms involving the juxtaposition of otherwise distal chromosomal loci in the three-dimensional nuclear space [3][4][5] . However, trypanosomes lack classical enhancer sequences or regulated transcription initiation 6,7 . In this context, it has remained unclear how genome architecture contributes to monogenic transcription elongation and transcript processing. Here, we show that the single expressed antigen-coding gene displays a specific inter-chromosomal interaction with a major messenger RNA splicing locus. Chromosome conformation capture (Hi-C) revealed a dynamic reconfiguration of this inter-chromosomal interaction upon activation of another antigen. Super-resolution microscopy showed the interaction to be heritable and splicing dependent. We found a specific association of the two genomic loci with the antigen exclusion complex, whereby VSG exclusion 1 (VEX1) occupied the splicing locus and VEX2 occupied the antigen-coding locus. Following VEX2 depletion, loss of monogenic antigen expression was accompanied by increased interactions between previously silent antigen genes and the splicing locus. Our results reveal a mechanism to ensure monogenic expression, where antigen transcription and messenger RNA splicing occur in a specific nuclear compartment. These findings suggest a new means of post-transcriptional gene regulation.
Monogenic expression-the expression of a single gene from a large gene family-is essential for several important biological processes. One of the most striking examples of such regulation is the expression of a single odorant receptor from more than 1,400 genes in mammalian olfactory sensory neurons 3 . Likewise, monogenic expression is a key feature of antigenic variation, an immune evasion strategy used by pathogens such as Plasmodium falciparum and Trypanosoma brucei. Antigenic variation refers to the capacity of an infecting organism to systematically alter the identity of proteins displayed to the host immune system 1 . How pathogens ensure the exclusive expression of only one antigen from a large pool of antigen-coding genes remains one of the most intriguing questions in infection biology. Through a combination of chromosome conformation capture and imaging techniques, here, we investigate the role of genome architecture and show that the spatial integration of transcription and messenger RNA (mRNA) splicing in a dedicated sub-nuclear compartment underpins monogenic antigen expression in trypanosomes.
In T. brucei-a unicellular parasite responsible for lethal and debilitating diseases in humans and animals-10 million copies of a single variant surface glycoprotein (VSG) isoform are exposed on the surface of the parasite. The exclusive expression of only one VSG gene per cell and the periodic switching of the expressed VSG gene allow the parasite to evade the host immune system and to maintain persistent infections 2,8 . While the T. brucei genome encodes >2,600 VSG isoforms, in the bloodstream of the mammalian host, a VSG gene can only be transcribed when located in one of ~15 VSG expression sites. Those bloodstream expression sites are polycistronic transcription units located adjacent to telomeres on different chromosomes. Each bloodstream expression site contains an RNA polymerase I (Pol I) promoter, followed by several expression site-associated genes and a single VSG gene 7 .
Notably, Pol I transcription initiates at all VSG expression site promoters, but transcription elongation and transcript processing are highly selective and limited to just one expression site at a time 9,10 . As a result, the single active VSG gene is expressed as the most abundant mRNA and protein in the cell; 5-10% of the total in each case. Why transcription is aborted at all but one expression site is not known. In trypanosomes, mRNA maturation involves trans-splicing, a process that adds a common spliced leader sequence to each pre-mRNA and is coupled to polyadenylation 11 . In addition, the proximity of individual genes to nuclear condensates composed of splicing factors has recently been proposed to play a role in gene expression regulation in mammals 12,13 . Thus, regulated access to RNA maturation compartments may represent an evolutionarily conserved strategy for gene expression control.
One mechanism to ensure monogenic expression is the juxtaposition of otherwise distal chromosomal loci in the three-dimensional (3D) nuclear space. In particular, specific interactions between promoter and enhancer sequence elements can ensure the selective regulation of individual genes. Although classic enhancer structures appear to be absent in many unicellular eukaryotes such as trypanosomes, several observations suggest that a specific genome organization is required for monogenic VSG expression. The single active VSG gene is transcribed in an extranucleolar Pol I compartment known as the expression site body 14 . In those very rare cases (<10 −8 ) where two VSG genes are simultaneously active, both co-localize at the expression site body 15,16 . In addition, the transcribed chromosome core regions and the subtelomeric regions coding for the large reservoir of silent VSG genes appear to fold into structurally distinct compartments 7 , similar to the active A and silent B compartments described in mammalian cells 5 . While the nature of the expression site body has remained enigmatic, a protein complex specifically associated with the active VSG gene was identified recently. VSG exclusion 1 (VEX1) emerged from a genetic screen for allelic exclusion regulators 17 while VEX2 was affinity purified in association with VEX1 (ref. 18 ). The bipartite VEX protein complex maintains mutually exclusive VSG expression 18 but it remains unclear how these proteins exert their function. In this study, we aimed to identify the mechanism that connects RNA maturation, genome architecture and the VEX complex to ensure monogenic antigen expression.
Given the well-characterized role of promoter-enhancer interactions in the selective regulation of genes, we set out to identify specific DNA-DNA interactions with a regulatory role in monogenic VSG expression. To this end, we used a T. brucei culture homogeneously expressing a single VSG gene for chromosome conformation capture (Hi-C) analysis. In addition, we employed the mHi-C analysis pipeline, which allowed us to retain many multi-mapping reads and greatly increased the read coverage across repetitive regions of the genome 19 .
To visualize specific interaction patterns of loci of interest (viewpoints) in the Hi-C dataset, we applied a virtual 4C analysis pipeline to extract genome-wide interaction profiles for chosen viewpoints. To identify VSG gene-specific interaction patterns, we chose the active and several inactive VSG genes located in expression sites as viewpoints and plotted the extracted virtual 4C interaction data onto the genome. As expected, we observed a distance-dependent decay of intra-chromosomal interactions between each viewpoint and its upstream and downstream genomic region ( Fig. 1a and Extended Data Fig. 1a). For the VSG-2 gene located on chromosome 6 in expression site 1 (Fig. 1a, top panel), the distance-dependent decay was not as characteristic as for other viewpoints. As we have published previously 7 , this expression site is separated from the core region of chromosome 6 by a centromere, which can serve as a boundary element, inhibiting frequent interactions between the two chromosomal arms.
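Conceptually, a virtual 4C profile is simply one row (or the sum of a few rows) of the binned Hi-C contact matrix for the viewpoint of interest. The sketch below illustrates the idea for a dense genome-wide matrix; it is not the mHi-C/virtual 4C pipeline used in this study:

```python
import numpy as np

def virtual_4c(contacts, viewpoint_bins, normalize=True):
    """Extract a genome-wide interaction profile for a chosen viewpoint.

    contacts       : square matrix of binned Hi-C contact frequencies
    viewpoint_bins : indices of the bin(s) covering the locus of interest
    """
    profile = contacts[viewpoint_bins, :].sum(axis=0).astype(float)
    profile[viewpoint_bins] = 0.0          # mask self-interactions of the viewpoint
    if normalize and profile.sum() > 0:
        profile /= profile.sum()           # relative interaction frequencies
    return profile
```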
Strikingly, we found the active VSG-2 gene located on chromosome 6 in expression site 1 to very frequently interact with a distinct locus on chromosome 9 (Fig. 1a,b). Levels of interaction frequency were higher than intra-chromosomal interactions of VSG-2 with its genomic location on chromosome 6, pointing to a strong and stable inter-chromosomal interaction. The locus on chromosome 9 interacting with the active VSG gene is the spliced leader RNA (SL-RNA) array, a genomic locus essential for RNA maturation. This locus contains a cluster of ~150-200 tandemly repeated genes encoding the SL-RNA. SL-RNA is an RNA Pol II-transcribed non-coding RNA that is trans-spliced to the 5′ end of all trypanosome mRNAs, conferring the 5′ cap structure required for RNA maturation, export and translation 11 . Conversely, just like arbitrarily chosen control regions, VSG genes residing in inactive expression sites interacted less frequently or at background levels with the SL-RNA locus (Fig. 1a,b and Extended Data Fig. 1a). In agreement with these observations, when we chose the SL-RNA locus as the viewpoint, we found it to interact more frequently with the active VSG expression site than with any inactive VSG expression site (Extended Data Fig. 1b). Looking for further genomic loci that made inter-chromosomal interactions with the VSG-2 gene of at least 20% of the VSG-2-SL-RNA interaction frequency, we identified a second locus: the centromere of chromosome 11 interacted with VSG-2 at 37% of the frequency observed for the VSG-2-SL-RNA interaction (Fig. 1a, top panel). Since the VSG-2 gene is located next to the centromere of chromosome 6, we suspect these interactions to be a consequence of previously observed centromere-centromere interactions, and not to be related to the active expression of VSG-2. Thus, the Hi-C analysis revealed a strong and selective interaction between the Pol I-transcribed active VSG gene and the Pol II-transcribed SL-RNA locus located on a different chromosome.
To visualize the spatial proximity between the active VSG gene and the SL-RNA locus at the level of individual cells and with an independent assay, we performed super-resolution immunofluorescence microscopy assays (IFAs). The site of VSG transcription is characterized by an extranucleolar accumulation of RNA Pol I 14 .
The site of Pol II-transcribed SL-RNA is marked by an accumulation of the small nuclear RNA-activating protein complex (tSNAPc), an RNA Pol II promoter-binding transcription factor 20 . Both the VSG transcription and the SL-RNA transcription compartments appear to have diameters of ~300 nm, within a T. brucei nucleus with a diameter of ~2 μm. Super-resolution microscopy revealed one VSG transcription compartment (reflecting the hemizygous active subtelomeric VSG-2) and two separate SL-RNA transcription compartments (reflecting the diploid SL-RNA arrays located in the core of chromosome 9) ( Fig. 1c and Extended Data Fig. 2). By scoring nuclei for overlapping, adjacent or separate VSG and SL-RNA transcription compartments, we found that one of the SL-RNA transcription compartments was adjacent to the VSG transcription compartment in the majority of cells (Fig. 1c). The SL-RNA transcription compartments were always extranucleolar (Extended Data Fig. 2d), but those adjacent to the VSG transcription compartment were significantly less intense and significantly closer to the nucleolus (Extended Data Fig. 2e,f). Furthermore, during DNA replication in the S phase, the VSG and SL-RNA transcription compartments were detected in separate locations in >50% of nuclei. Therefore, throughout this study, IFAs were subsequently performed in G1 cells, unless otherwise indicated. Notably, VSG and SL-RNA transcription compartments were once again adjacent in most G2 nuclei ( Fig. 1c and Extended Data Fig. 2b,c), indicating that the interaction is resolved during the S phase and successfully re-established after replication. Taken together, IFAs supported the findings made by Hi-C, suggesting that the Pol I VSG transcription compartment interacts with one of the Pol II SL-RNA transcription compartments.
To determine whether the interaction with the SL-RNA transcription compartment is specific for the active VSG gene, and therefore changes following a VSG switching event, we performed Hi-C experiments using an isogenic T. brucei cell line expressing a different VSG isoform, VSG-13 (Fig. 2a) 21 . VSG-13 resides within expression site 17, which is located on one of five intermediate-sized chromosomes. The presence of co-transcribed resistance markers upstream of VSG-2 in expression site 1 and VSG-13 in expression site 17 allowed us to specifically select for parasites expressing VSG-13 through drug selection (Fig. 2a). The exclusive activity of expression site 1 or 17 was verified by RNA sequencing (RNA-seq) (Fig. 2a).
Hi-C analysis revealed that VSG-2-SL-RNA interactions dropped 20-fold to average inter-chromosomal interaction levels in parasites expressing VSG-13, while interactions between the newly activated VSG-13 and the SL-RNA locus increased 36-fold (Fig. 2b, left and middle panels). As suspected, VSG-2-centromere (chromosome 11) interactions remained unchanged after inactivation of VSG-2 (Extended Data Fig. 3a), suggesting that this is indeed a consequence of centromere-centromere interaction. Furthermore, we found that upon activation of each expression site, the bin harbouring the respective active VSG gene displayed the strongest interaction with the SL-RNA locus, suggesting that the VSG gene itself, not its promoter, interacts with the splicing locus (Fig. 2c). In addition, we detected decreased interaction of the inactivated VSG-2 gene with the transcribed chromosome cores, while the activated VSG-13 gene displayed increased interaction with chromosome cores (Extended Data Fig. 3b). This observation indicates that activation of a VSG gene is intimately linked to a transition from a silent to an actively transcribed compartment within the nucleus. Conversely, VSG gene inactivation results in a transition from an active to an inactive nuclear compartment.

Fig. 1c (caption): Immunofluorescence-based colocalization studies of tSNAP myc (SL-RNA locus marker; SL-RNA transcription compartment) and a nucleolar and active VSG transcription compartment marker (Pol I; largest subunit) using super-resolution microscopy. The stacked bar graph depicts the proportions of G1, S phase and G2 nuclei with overlapping, adjacent or separate signals for the SL-RNA and VSG transcription compartments (these categories were defined by thresholded Pearson's correlation coefficients; see Methods). A two-tailed paired Student's t-test was used to compare S or G2 nuclei versus G1 nuclei, and the statistical significance is highlighted where applicable (**P < 0.01). Values are averages of three independent experiments and representative of two independent biological replicates (≥100 nuclei). Error bars represent s.d. Detailed n and P values are provided in the source data. DNA was counter-stained with DAPI. The images correspond to maximum 3D projections of stacks of 0.1-μm slices. Scale bars, 2 μm. N, nucleus; K, kinetoplast (mitochondrial genome). The representative histogram depicts the distribution of signal intensity across the distance indicated by the cyan line.
To further explore the relationship between SL-RNA interaction frequency and gene expression, we performed Hi-C analyses using insect-stage parasites that do not express any VSGs, but instead express a different group of surface antigens called procyclin genes. Confirming the importance of the SL-RNA interaction, the GPEET (rich in Gly-Pro-Glu-Glu-Thr repeats) and EP1 (rich in Glu-Pro repeats) procyclin genes displayed an increased interaction frequency with the SL-RNA locus upon activation in insect-stage cells (Fig. 2b (right panel) and Extended Data Fig. 3c). Like VSG genes, procyclin genes are transcribed by RNA Pol I at high levels and require efficient trans-splicing for mRNA maturation. Thus, Hi-C analyses of T. brucei cell lines expressing different antigens indicated that interactions with the SL-RNA locus are dynamic and specific for actively transcribed antigen-coding genes. Previously, we had shown that the bipartite VEX complex is associated with the actively transcribed VSG gene and maintains monogenic VSG expression, but that, by microscopy, VEX1 and VEX2 signals only partially overlap each other 18 .

Fig. 3 (caption): a,b, Immunofluorescence-based colocalization studies of VEX1 myc /tSNAP GFP and myc VEX2/Pol I. tSNAP and Pol I were used as markers for the SL-RNA and VSG transcription compartments, respectively. The stacked bar graphs depict the proportions of nuclei with overlapping, adjacent or separate signals (these categories were defined by the thresholded Pearson's correlation coefficient; see Methods). The values are averages of four (a) or two (b) independent experiments (≥100 G1 or S phase nuclei). c, VEX1 myc chromatin immunoprecipitation followed by next-generation sequencing (ChIP-seq) analysis. Top: log 2 [fold changes] of ChIP signal versus input sample across the SL-RNA locus (bin size = 300 bp). Middle: magnification of three SL repeats (the arrows represent promoters and the magenta boxes represent SL-RNA; bin size = 10 bp). Bottom: log 2 [fold enrichment] across the active VSG gene (VSG-2 (green box); bin size = 10 bp). kbp, kilobase pairs. d-f, Immunofluorescence analyses of VEX1 myc (d), myc VEX2 (e) and tSNAP myc (f) before and after sinefungin treatment (2 μg ml −1 for 30 min at 37 °C). Cells displaying no detectable signal (<10%) were excluded. The values are averages of two independent experiments (≥200 nuclei each). g, Immunofluorescence-based colocalization studies of the SL-RNA transcription (tSNAP myc ) and VSG transcription compartments (Pol I, large subunit) following treatment with sinefungin. The stacked bar graph depicts the proportions of G1 nuclei with overlapping, adjacent or separate signals. The values are averages of two independent experiments and two biological replicates (≥100 G1 nuclei). The studies in a, b and d-g were undertaken using super-resolution microscopy and the images correspond to maximum 3D projections of stacks of 0.1-μm slices. DNA was counter-stained with DAPI. Scale bars, 2 μm. In a, b and d-g, a two-tailed paired Student's t-test was used to compare G1 versus S nuclei and non-treated versus treated nuclei, respectively, for each category. Statistical significance is highlighted where applicable (**P < 0.01; ***P < 0.001). The experiments in a, b and d-g are representative of at least two independent biological replicates, and error bars represent s.d. Detailed n and P values are provided in the source data.
Given a similar juxtaposition of the VSG transcription and SL-RNA transcription compartments, we sought to investigate the relationship between the VEX complex and these transcription compartments in more detail. Using optimized immunofluorescence staining protocols and super-resolution microscopy, we were able to detect two VEX1 foci in the majority of G1 nuclei (55 ± 4%); one VEX1 focus was detected in the remainder. These VEX1 signals specifically co-localized with the SL-RNA transcription compartments (Fig. 3a). In contrast, the majority of G1 cells (97 ± 1%) only had one VEX2 focus, which specifically co-localized with the VSG transcription compartment (Fig. 3b). As expected, one VEX1 focus was adjacent to the VSG transcription compartment (Extended Data Fig. 4a) while the VEX2 focus was adjacent to one of the two SL-RNA transcription compartments (Extended Data Fig. 4b). Thus, our IFAs revealed association of VEX1 with the SL-RNA transcription compartments and VEX2 with the VSG transcription compartment. Moreover, the VEX1 signal intensity was significantly higher at the focus adjacent to the VSG transcription compartment (Extended Data Fig. 4a), indicating that all nuclei may have two VEX1 foci, but the second focus may be below the detection limit in some cells. Furthermore, to verify a specific interaction between VEX1 and the SL-RNA locus, we reanalysed published VEX1 data from chromatin immunoprecipitation followed by sequencing (ChIP-seq), previously only mapped to VSG expression sites 18 . The ChIP-seq data revealed a striking enrichment of VEX1 at the SL-RNA locus that was greater than at any other gene, including the active VSG gene (Fig. 3c, Extended Data Fig. 4c and Supplementary Data 1 (sheet 1)). These data suggest that the VEX complex is specifically associated with the VSG and SL-RNA transcription compartments.
While VSG and SL-RNA transcription compartments separate during the S phase ( Fig. 1c and Extended Data Fig. 2a-c), VEX1 does not separate from the SL-RNA transcription compartment and VEX2 does not separate from the VSG transcription compartment ( Fig. 3a,b). Also, consistent with the loss of VSG expression in insect-stage cells, the extranucleolar Pol I focus is lost, while SL-RNA transcription compartments can still be identified through tSNAP localization; the VEX proteins also redistribute, but a pool of VEX1 remains detectable at SL-RNA transcription compartments (Extended Data Fig. 4d). These results indicate that VEX2 marks the VSG transcription compartment in a developmental stage-specific manner, which in bloodstream-stage cells may facilitate the re-establishment of compartment connectivity to propagate the expression of a specific antigen. Consistent with this idea, VEX complex reassembly after DNA replication is dependent on chromatin assembly factor-1 histone chaperone function 18 .
Given the close spatial proximity between the site of VSG transcription and the site of SL-RNA transcription, we next questioned whether the splicing process itself impacts the connection between these compartments ( Fig. 3d-g and Extended Data Fig. 5). Sinefungin inhibits the cap guanylyltransferase-methyltransferase Cgm1 (ref. 22 ) and subsequent methylation of the SL-RNA cap, required for trans-splicing 23,24 . We found that inhibition of trans-splicing with sinefungin 23 disrupted both VEX1 (Fig. 3d) and VEX2 (Fig. 3e) localization within 30 min, while the tSNAP transcription factor was not affected under the same conditions ( Fig. 3f). Notably, inhibition of splicing by sinefungin also disrupted the connection between the VSG and SL-RNA transcription compartments, as revealed by separation of the Pol I and tSNAP signals (Fig. 3g); neither the VEX nor the tSNAP protein levels were substantially affected by sinefungin treatment (Extended Data Fig. 5a). The delocalization of the VEX proteins following splicing inhibition suggests that the process of RNA maturation or mature RNAs from the active VSG expression site play a role in the assembly and/ or maintenance of these protein condensates. Consistent with this idea, inhibition of Pol I or both Pol I and Pol II transcription with BMH-21 or actinomycin D, respectively, also similarly disrupted VEX1 and VEX2 localization without affecting tSNAP localization (Extended Data Fig. 5b,c). Thus, VEX protein localization is dependent on Pol I transcription or VSG expression site RNAs and mRNA splicing activity.
Next, we aimed to investigate the mechanism by which the VEX complex ensures monogenic VSG expression.
Fig. 4 | The exclusive association between the active VSG gene and the SL-locus is VEX2 dependent. a,b, Immunofluorescence- and super-resolution microscopy-based colocalization studies of tSNAP myc (the SL-RNA transcription compartment) and Pol I (the nucleolus and extranucleolar reservoir) following VEX1, VEX2 and VEX1/VEX2 knockdown. a, Representative images are depicted. Right: two representative histograms depict the distribution of signal intensity across the distance indicated by the cyan lines. b, The violin plot depicts the 'inner' distance between the Pol I extranucleolar reservoir and the nearest SL-RNA transcription compartment (≥81 G1 nuclei) following VEX1, VEX2 and VEX1/VEX2 knockdown. White circles show the medians; box limits indicate the 25th and 75th percentiles as determined by R software; whiskers extend 1.5× the interquartile range from the 25th and 75th percentiles; and polygons represent density estimates of data and extend to extreme values. c,d, DNA FISH and super-resolution microscopy-based colocalization studies of the SL-RNA transcription compartments (probe: digoxigenin-labelled SL repeats) and VSG expression sites (probe: biotin-labelled 50-bp repeats) following VEX2 knockdown. c, Representative images are depicted. d, The bar graph depicts the percentage of G1 nuclei with VSG expression site clusters (size > 0.2 μm 3 ) overlapping or adjacent to the SL arrays (within 60 nm) before and after VEX2 knockdown; error bars represent s.d. The data are the average of two biological replicates and two independent experiments. a,c, All nuclei are G1 and the images correspond to maximal 3D projections of stacks of 0.1-μm slices. DNA was counter-stained with DAPI; scale bars, 2 μm; images are representative of two biological replicates and two independent experiments. Statistical analysis was undertaken using a two-tailed unpaired (b) or paired (d) Student's t-test. Detailed n and P values are provided in the source data. e, Hi-C (virtual 4C) analyses between the SL-RNA locus (chromosome 9) and different expression sites. Relative interaction frequencies between the viewpoint and the VSG expression sites are shown before and after VEX2 knockdown. Each dot represents the average value for one expression site. Bin size = 20 kb. f, Virtual 4C analyses between the SL-RNA locus (chromosome 9) as viewpoint and the different expression sites before and after VEX2 knockdown. Relative interaction frequencies between the viewpoint and VSG expression sites are shown. Ticks on the x axes mark the bins used for the virtual 4C analyses. g, Virtual 4C analyses between the EP1 (chromosome 10) or GPEET gene array (chromosome 6) as viewpoint and the SL-RNA locus. Relative interaction frequencies between the viewpoint and the SL-RNA locus are plotted. Bin size = 20 kb. The analyses in e-g are based on Hi-C experiments with cells before and 24 h after VEX2 knockdown (the average of three biological replicates is shown). The coordinates of all viewpoints used for virtual 4C analyses are listed in Supplementary Data 1 (sheet 2). h, Schematic for monogenic VSG expression. A strong inter-chromosomal interaction between the SL array and the active VSG gene facilitates spatial integration of transcription and mRNA maturation. VEX1 and VEX2 are primarily spliced leader and active VSG associated, respectively, and sustain monogenic VSG expression by excluding other VSGs. The VSG-SL organelle is reconfigured upon activation of a different VSG.
Previously, we found that VEX2 depletion leads to a strong activation of VSG genes located in previously silent expression sites 18 . Thus, following our observation that the VEX complex spans the VSG transcription and the SL-RNA transcription compartments, we sought to determine whether VEX2 functions as a connector or as an exclusion factor (that is, whether, following VEX2 depletion, the active VSG gene loses connectivity to the SL-RNA transcription compartment or whether previously inactive expression sites start to interact with the SL-RNA transcription compartment). To test these models, we analysed the distance between the extranucleolar Pol I focus (expression site body) and the SL-RNA transcription compartment following depletion of VEX complex components (Extended Data Fig. 6a,b). IFA data revealed a dissociation of Pol I from the SL-RNA transcription compartments in 45% of G1 nuclei following 12 h of VEX2 knockdown (P < 0.001) and 60% following VEX1/VEX2 double knockdown (P < 0.01) (Fig. 4a,b and Extended Data Fig. 6c,d). Association between the extranucleolar Pol I focus and the SL-RNA transcription compartment was not significantly disrupted following VEX1 knockdown. Therefore, in the absence of VEX2, Pol I initially separates from the SL-RNA compartment (12 h) and, at later time points, it disperses 18 , indicating that VEX2 sustains a local reservoir of Pol I at the active VSG gene. Pol I dispersion temporally overlaps with the activation of previously silent VSG expression sites.
Next, we performed DNA fluorescence in situ hybridization (FISH) to visualize all VSG expression sites in the 3D nuclear space and their position relative to the SL-RNA compartments before and after VEX2 depletion (Fig. 4c,d and Extended Data Fig. 7). We used one probe that recognized the 50 base pair (bp) repeats that were present immediately upstream of all VSG expression sites, and another probe that recognized the SL-RNA repeats. In unperturbed cells, expression sites were distributed throughout the nucleus (Fig. 4c), as described previously 15,16 . Following VEX2 depletion (24 h), the number of individual VSG expression site signals decreased while their size increased (Extended Data Fig. 7a), consistent with the formation of VSG expression site clusters. Indeed, the proportion of G1 nuclei displaying VSG expression site clusters overlapping or adjacent to an SL-RNA compartment increased upon VEX2 knockdown (Fig. 4d). We also observed a decrease in the distance between spliced leader arrays (Extended Data Fig. 7b-d). Therefore, in the absence of VEX2, previously silent VSG expression sites displayed increased spatial proximity to SL-RNA compartments and were derepressed.
To further explore the role of VEX2 in controlling interactions between antigen-coding genes and SL-RNA loci, we performed Hi-C analyses in VEX2-depleted cells. After 24 h of VEX2 depletion, all previously silent expression sites displayed increased interaction frequencies with the SL-RNA locus (Fig. 4e), consistent with the DNA FISH analysis (Fig. 4c-d). The interaction between VSG expression site 3 and the SL-RNA locus showed the strongest increase. Notably, this is the expression site containing the most derepressed VSG (VSG-6) following VEX2 depletion 18 . Interaction frequencies of the active VSG expression site 1 with the SL-RNA locus remained unchanged, correlating with sustained and dominant VSG-2 expression. Thus, both Hi-C and DNA FISH analyses point to close spatial proximity between the active VSG locus and one of the SL-RNA loci before and after VEX2 depletion. Indeed, overall SL-RNA interactions correlated with VSG transcript levels before and after VEX2 knockdown (Extended Data Fig. 8a).
Besides the VSG genes located in previously silent expression sites, expression site-associated genes were also strongly upregulated following VEX2 knockdown 18 . In line with this finding, we observed the largest increase in SL-RNA interactions for the regions upstream of the VSG gene in each derepressed expression site, where expression site-associated genes are located ( Fig. 4f and Extended Data Fig. 8b).
As a third group of RNA Pol I-transcribed genes, insect-stage-specific procyclin genes are upregulated upon VEX2 depletion 18 . Correlating with these data, following VEX2 knockdown, we found GPEET and EP1 procyclin genes to exhibit strongly increased interaction frequencies with the SL-RNA array and also with VSG expression sites (Fig. 4g and Extended Data Fig. 8c).
Thus, our data suggest that VEX2 has a dual function: specifically enhancing mRNA splicing of the VSG gene that is connected to the SL-RNA transcription compartment and, at the same time, excluding all other VSG expression sites and procyclin genes from the SL-RNA compartment to ensure monogenic VSG expression.
By combining proximity ligation and super-resolution microscopy, we were able to demonstrate spatial integration of the active VSG expression site and a genomic locus important for RNA maturation. Our data show that this supramolecular assembly is composed of a VSG transcription compartment with the active VSG gene, RNA Pol I and VEX2 and an SL-RNA transcription compartment with the SL-RNA array, RNA Pol II, VEX1 and the tSNAP complex, presumably together with other factors important for mRNA trans-splicing 20 . Based on these findings, we propose a model in which VSG choice is intimately associated with an inter-chromosomal interaction, bringing together two nuclear compartments to ensure efficient VSG mRNA processing at only one expression site (Fig. 4h). In the VSG transcription compartment, the VSG gene is transcribed by highly processive RNA Pol I, generating large amounts of VSG pre-mRNA that requires efficient processing to prevent premature degradation. In the SL-RNA transcription compartment, SL-RNA (an essential substrate for maturation of every mRNA) is produced. The close spatial proximity of the two compartments in a single organelle provides a sufficiently high concentration of trans-splicing substrate to ensure the efficient maturation of highly abundant VSG transcripts.
Our data indicate that VEX2 is not simply a tether. Instead, they led us to propose a temporal 'choose and consolidate' component to our model: VEX2 is recruited to expression sites in a stochastic and competitive manner by Pol I transcription (choose), as supported by Pol I inhibition experiments that resulted in dispersion of VEX2 (ref. 18 and Extended Data Fig. 5b). Subsequently, the presence of VEX2 at one expression site consolidates expression site transcription, as supported by VEX2 depletion experiments that resulted in delocalization of the Pol I focus (Fig. 4a,b). Consolidation by VEX2 also involves the formation of a particularly prominent interaction between the active VSG and the SL-RNA locus, as supported by interactions primarily with the regions upstream of derepressed VSGs following VEX2 depletion (Fig. 4f). Increased interaction with the SL-RNA array is accompanied by increased transcription of expression sites, as supported by Hi-C and FISH (Fig. 4c-f), also leading to efficient VSG mRNA processing. We propose that these initial interactions occur in a stochastic manner, providing opportunities to overcome exclusion. The probability of forming and consolidating a new stable interaction with the SL-RNA locus is probably enhanced when the VEX complex is no longer effectively sequestered (for example, when the active VSG expression site is damaged 25 or when chromatid cohesion 26 or telomeric chromatin 27 are disrupted). According to this model, VEX2 is the key molecule excluding all but one VSG expression site from the SL-RNA transcription compartment, thereby ensuring expression of a single VSG gene per cell.
RNA processing as the limiting factor in monogenic VSG expression has been proposed previously, based on the observations that all VSG expression site promoters are active, yet processive polycistronic transcription and mature transcripts are specific to the single active expression site 9,10 . Given the observation that co-transcriptional RNA processing can affect elongation rates in other organisms 28 , it will be interesting to determine whether efficient VSG mRNA processing will exert a positive feedback on transcription elongation along expression sites. Also, it was shown recently that splicing can activate transcription 29 . It remains to be shown whether factors located in the SL-RNA transcription compartment recruit the transcription machinery to the interacting VSG expression site and thereby enhance transcription. Notably, we also find other highly expressed housekeeping genes, such as core histones and tubulin, to associate with the SL-RNA locus (Extended Data Fig. 9a) and with VEX1 (Extended Data Fig. 9b). Thus, interactions with the SL-RNA locus, which is thought to be >250 kilobase pairs in length 30 , may play a broader role for the regulation of gene expression in T. brucei.
While the importance of intra-chromosomal interactions in regulating gene expression has been shown in many complex eukaryotes, the significance of inter-chromosomal interactions has been questioned. Interactions between different chromosomes were thought to require complicated, possibly error-prone mechanisms to be re-established following mitosis. Yet, our data demonstrate a stable interaction between the active VSG gene (located either on chromosome 6 or on an intermediate-sized chromosome) and the SL-RNA array locus on chromosome 9 (an interaction that is also stably propagated during cell division).
Protein condensates have recently emerged as important features that compartmentalize nuclear functions (for example, transcription control by RNA polymerases 31 ). Regulated switching between adjacent transcriptional and splicing condensates has been described in mammalian cells 32 and RNAs are major actors in facilitating genomic interactions and phase transitions 33 . Given the recent finding that members of a family of helicases are global regulators of RNA-containing, phase-separated organelles 34 , it is tempting to speculate that maintenance of VSG transcription-maturation compartment connectivity is similarly regulated by the putative RNA helicase VEX2 (ref. 18 ). We show here that in T. brucei the assembly of two membraneless nuclear condensates, each with a specific function in VSG gene transcription and RNA maturation, and both containing protein and RNA molecules 15,20 , regulates monogenic VSG expression. By shaping a highly selective and specific genome architecture, VEX2 allows only one VSG gene to productively interact with the mRNA splicing compartment.
Methods
No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment.
T. brucei growth and manipulation.
Bloodstream-form T. brucei Lister 427 and 2T1 cells 35 , both wild type with respect to VEX1, VEX2 and the tSNAP complex subunit, SNAP42, were grown in HMI-11 medium and genetically manipulated using electroporation 36 ; cytomix was used for all transfections. Puromycin, phleomycin, hygromycin and blasticidin were used at 2, 2, 2.5 and 10 µg ml −1 for the selection of recombinant clones, and at 1, 1, 1 and 2 µg ml −1 for maintaining those clones, respectively. RNA interference (RNAi) experiments were undertaken through tetracycline induction at 1 µg ml −1 . A double-selection T. brucei cell line was used that derived from the Lister 427 bloodstream-form MITat 1.2 isolate 21 . A neomycin resistance gene in VSG expression site 17 and a puromycin resistance gene in VSG expression site 1 allowed the selection for a homogenous cell population that expressed either VSG-2 from expression site 1 or VSG-13 from expression site 17. Cells were cultivated with either 10 µg ml −1 neomycin (also referred to as N50 cells) or 0.1 µg ml −1 puromycin (referred to as P10 cells). Established procyclic-form T. brucei Lister 427 cells were grown in SDM-79 at 27 °C and genetically manipulated using electroporation, as above. Blasticidin or hygromycin was used at 10 or 50 and 2 or 1 μg ml −1 for selection and maintenance, respectively.
ChIP-seq. ChIP-seq was carried out as described in ref. 18 . Reads were aligned to the 11 curated megabase chromosomes from the TREU927 strain genome sequence 38 and a non-redundant set of bloodstream expression site and metacyclic VSG contigs from the Lister 427 strain 39-41 using Bowtie 2 (ref. 42 ) in very sensitive alignment mode, and alignments were compressed and sorted using SAMtools 43 .
Bowtie 2 attempted to align 54.0 and 49.9 million read pairs with 70.84 and 82.43% success rates, respectively, resulting in 38.3 and 41.1 million aligned read pairs. PCR duplicate reads were removed using Picard MarkDuplicates (http://broadinstitute.github.io/picard/), resulting in 26.8 and 41.3 million aligned read pairs for analysis. Alignments were visually inspected with the Artemis genome browser 44 . Circular plots (Extended Data Fig. 4c) were generated using the R library circlize 45 , and bedGraph files for log 2 [fold change] (Fig. 3c and Extended Data Fig. 9b) were generated using deepTools2 (ref. 46 ). BedGraph files were generated with 1-kilobase (kb), 300-bp or 10-bp bins and the option smoothLength 5,000 (see captions for Fig. 3 and Extended Data Fig. 4c).
SL-RNA sequences were annotated using the sequences CGTTTCTGGCACGACAGTAAAATATGGCAAGTGTCTCAAAACTGCCTGTACAGCTTATTTTTGGGACACACCCATGCTTTC (promoter) and AACTAACGCTATTATTAGAACAGTTTCTGTACTATATTGGTATGAGAAGCTCCCAGTAGCAGCTGGGCCAACACACGCATTTGTGCTGTTGGTTCCCGCCGCATACTGCGGGAATCTGGAAGGTGGGGTCGGATGACCTC (transcript). The transcript features were plotted with transcription start site and transcription end site denoting the 5′ and 3′ extremities. Fold enrichment traces covering the spliced leader locus were calculated directly using deepTools bamCompare. Heat maps (Extended Data Figs. 4c and 9b) were generated using deepTools2 bamCompare, computeMatrix and plotHeatmap 46 , and the resulting vector graphics files were then assembled into figures using Adobe Illustrator. Genomic regions for tandem genes and arbitrarily selected genes with a paralogue count of 0 were assembled in bed files using annotated mRNA sequences from TriTrypDB version 5.1 of the TREU927 genome sequence. All scripts necessary to reproduce the ChIP-seq analyses have been deposited, together with the results of those analyses, under https://doi.org/10.5281/zenodo.3628212.
FISH.
For DNA FISH experiments, biotin-and digoxigenin-labelled DNA probes were generated by PCR using standard conditions with OneTaq polymerase (New England Biolabs), with the exception that a 1:2 ratio of biotin-16-dUTP (Roche) or digoxigenin-11-dUTP (Roche) and dTTP were used in the reaction. Repeats of 50 bp and spliced leader repeats were amplified from T. brucei L427 genomic DNA ( Supplementary Data 1 (sheet 3)). A smear of products with various sizes was generated but only fragments of 400 bp or less were gel extracted and purified. DNA probes were co-precipitated with herring sperm DNA (Sigma-Aldrich) at 10 μg ml −1 and yeast transfer RNA (Invitrogen) at 10 μg ml −1 . Probes were then resuspended to a concentration of 1,000 ng ml −1 in hybridization buffer (50% formamide, 10% dextran sulfate and 2× saline-sodium citrate (SSC)). Before hybridization, cells were prepared similarly as for immunofluorescence: trypanosomes were fixed in 3% paraformaldehyde for 15 min at 37 °C, washed three times with PBS and finally resuspended in 1% bovine serum albumin (BSA). The cells were attached to poly-l-lysine-treated slides, then permeabilized with 0.5% Triton X-100 in PBS for 15 min at room temperature, washed three times with PBS and then treated with 1 mg ml −1 RNAse A (Invitrogen) in PBS for 1 h at room temperature. This was followed by a blocking step with 10 μg ml −1 herring sperm DNA and 10 μg ml −1 yeast transfer RNA in hybridization buffer (50% formamide, 10% dextran sulfate and 2× SSC) for 40 min at room temperature. After adding the probe mix to the slides, the samples were sealed with gene frames and denatured on an inverted heat block at 85 °C for 5 min, followed by overnight incubation at 37 °C. After hybridization, the slides were washed with 50% formamide and 2× SSC for 30 min at 37 °C, followed by three 10-min washes in 1× SSC, 2× SSC and 4× SSC at 50 °C. Samples were then incubated with an anti-digoxigenin antibody (Abcam; clone 21H8) diluted 1:10,000 in 1% BSA in PBS for 2 h at room temperature. After washing three times for 10 min in Tris-buffered saline with 0.01% Tween, the slides were incubated for 1 h with a streptavidin-Alexa Fluor 488 conjugate (Invitrogen) and a goat anti-mouse Alexa Fluor 568 antibody (Invitrogen), both diluted to 1:500 in 1% BSA. Samples were washed in Tris-buffered saline with 0.01% Tween, as before, and mounted in Vectashield with DAPI. For all of the experiments involving RNAi, the knockdown was always verified by western blot and the VSG derepression phenotype was confirmed by IFA before FISH analysis.
Imaging and image analysis. For wide-field microscopy, cells were analysed using a Zeiss Axiovert 200M microscope with an AxioCam MRm camera and ZEN Pro software (Carl Zeiss). The images were acquired as z stacks (0.1-0.2 µm) and further deconvolved using the fast iterative algorithm in ZEN Pro.
For super-resolution microscopy, cells were analysed using a Leica TCS SP8 confocal laser scanning microscope in HyVolution mode and the Leica Application Suite X software (Leica) or a Zeiss 880 Airyscan and the Zeiss ZEN software (Carl Zeiss). The Hyvolution mode was used with the following settings: highest resolution/ lowest speed; pinhole 0.5. All representative images obtained by super-resolution microscopy correspond to maximum 3D projections by the brightest intensity of stacks of approximately 30 slices of 0.1 µm (nuclei with DNA in grey).
VEX1, VEX2 and tSNAP foci and Pol I nucleolar and expression-site body (ESB) signals could be detected in over 85-90% of nuclei. Counts in total cells or specific cell cycle phases were performed typically in >200 or >100 nuclei, respectively. All quantifications are averages or representative of at least two biological replicates and independent experiments (see source data for details relating to specific experiments). Pearson's correlation coefficient was applied as a statistical measure of colocalization 50 . Overlapping, adjacent and separate foci presented a Pearson's correlation coefficient in the ranges ≥0.5 to ≤1, ≥−0.5 to <0.5 and ≥−1 to <−0.5, respectively. Regarding the distance measurements between the ESB and tSNAP compartment ( Fig. 4b and Extended Data Fig. 2c, inside edge distance), a control measurement (Extended Data Figs. 2b and 6d, outside edge distance) was performed to ensure that changes in the distance between the two protein condensates following cell cycle progression or VEX RNAi were not a consequence of changes in the diameter of the foci. For the ESB/tSNAP localization following VEX2 or VEX1/VEX2 RNAi (Fig. 4a,b and Extended Data Fig. 6c,d), all of the imaging and analyses were performed at 12 h post-induction, a time point when there was sufficient VEX2 knockdown (Extended Data Fig. 6b) but both nucleolar Pol I and the ESB could be detected in >85% of cells; the ESB is not detectable at later time points 18 . Moreover, the ESB/tSNAP localization analyses following VEX RNAi or sinefungin treatment were restricted to G1 cells to exclude any cell cycle bias, as these protein condensates can separate during the S phase ( Fig. 1c and Extended Data Fig. 2a-c). In approximately 75% of G2 cells, two ESBs could be detected; the n values provided in the figures (Fig. 1c and Extended Data Fig. 2b,c) correspond to ESB/tSNAP pairs (the number of nuclei is stated in the caption). All signal quantifications were performed as follows: corrected fluorescence = integrated density − (selected area × mean fluorescence of background readings). For all of the quantifications, images were acquired with the same settings and equally processed. All of the images were processed and scored using Fiji version 2.0.0 (ref. 51 ), using stacks of approximately 30 slices of 0.1 µm, except Fig. 4d and Extended Data Fig. 7a, where the analysis was performed using Imaris 9.5 (Oxford Instruments). Briefly, 3D composites were loaded into Imaris 9.5; the DAPI channel was used to segment the nuclei; VSG expression site foci were segmented in the 50-bp-repeats channel; and diameter, area, signal intensity and volume were determined for all foci in all nuclei. The foci average number and volume distribution in the parental population versus VEX2 RNAi is depicted in Extended Data Fig. 7a. Foci with a size ≥ 0.2 µm 3 were defined as VSG expression site clusters.
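The colocalization scoring described above reduces to a Pearson correlation between two channel intensities within the nuclear mask, thresholded into the three categories stated in the text. A minimal sketch (the thresholds match the text; the image handling itself is illustrative and not the Fiji workflow used in the study):

```python
import numpy as np

def colocalization_category(channel_a, channel_b, mask):
    """Classify two signals as overlapping, adjacent or separate
    based on the thresholded Pearson's correlation coefficient."""
    a = channel_a[mask].astype(float)
    b = channel_b[mask].astype(float)
    r = np.corrcoef(a, b)[0, 1]
    if r >= 0.5:
        return "overlapping", r
    elif r >= -0.5:
        return "adjacent", r
    else:
        return "separate", r
```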
For all of the experiments involving RNAi, the knockdown was always verified by western blot and the VSG derepression phenotype was confirmed by IFA and/or FACS analysis.
Complementary DNA (cDNA) synthesis, library preparation and sequencing. Synthesis of cDNA was performed using a NEBNext Ultra Directional RNA Library Prep Kit for Illumina (New England Biolabs; E7420) according to the manufacturer's instructions. The concentration of cDNA was measured using a Qubit dsDNA HS Assay Kit (Invitrogen; Q32854) and a Qubit 2.0 Fluorometer (Invitrogen; Q32866). To generate strand-specific RNA-seq libraries, uracil excision and removal of the second strand were performed before the conversion of Y-shaped adapters. To this end, 3 μl USER Enzyme (New England Biolabs; M5505) was mixed with 16 μl adapter-ligated DNA, 1 μl TruSeq PCR primer cocktail (50 μM) and 20 μl KAPA HiFi HotStart ReadyMix (KAPA Biosystems; KK2601). USER digestion was performed at 37 °C for 15 min, followed by the published amplification protocol. Library concentrations were determined in duplicate using a Qubit dsDNA HS Assay Kit (Invitrogen; Q32854) and a Qubit 2.0 Fluorometer (Invitrogen; Q32866), and were quantified using the KAPA Library Quantification Kit (KAPA Biosystems; KK4824) according to the manufacturer's instructions. Strand-specific RNA-seq libraries were sequenced in paired-end mode on an Illumina NextSeq 500 sequencer (2 × 75 cycles).
Processing of sequencing data. The sequencing datasets were mapped to the TbruceiLister427_2018 genome assembly (release 43; downloaded from TriTrypDB 53 ) using BWA-MEM 54 . The alignments were converted from SAM to BAM format, sorted, indexed and filtered by alignment quality (q > 0) using SAMtools version 1.9 (ref. 43 ). To visualize read coverage, the number of reads was normalized per billion mapped reads and coverage files were generated in the wiggle format using COVERnant version 0.3.1 with the subcommand ratio 55 .
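As a minimal illustration of the scaling step mentioned above (which COVERnant performs internally), the sketch below rescales a per-position coverage vector to reads per billion mapped reads; all names and numbers are hypothetical:

```python
# Illustrative sketch: scale raw per-position coverage to "per billion mapped reads".
# COVERnant's ratio subcommand handles this in the actual pipeline; this only makes
# the normalization factor explicit.

def normalize_per_billion(raw_coverage, total_mapped_reads):
    factor = 1e9 / float(total_mapped_reads)
    return [c * factor for c in raw_coverage]

coverage = [12, 0, 7, 33]            # raw read counts per position (made up)
print(normalize_per_billion(coverage, total_mapped_reads=25_000_000))
```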
In situ Hi-C. In situ Hi-C was performed as previously described 7 Biolabs; M0210)) and incubation at 23 °C for 4 h. The end-repaired chromatin was transferred to 665 μl ligation mix (1.8% Triton X-100, 0.18 mg BSA and 1.8× T4 DNA Ligase Buffer (Invitrogen; 46300018)) and 5 μl T4 DNA ligase (Invitrogen; 15224025) was added. The ligation was performed for 4 h at 16 °C with interval shaking. Crosslinks were reversed by adding 50 μl 10 mg ml −1 proteinase K (65 °C for 4 h) following the addition of another 50 μl 10 mg ml −1 proteinase K, 80 μl 5 M NaCl and 70 μl 10% SDS (at 65 °C, overnight). DNA was precipitated with ethanol and resuspended in 257 μl TLE (10 mM Tris-HCl and 0.1 mM EDTA (pH 8.0)). SDS was added to a final concentration of 0.1% and the sample was split among two tubes for sonication (Covaris S220 microtubes; 175 W peak incident power; 10% duty factor; 200 cycles per burst; 240-s treatment). The samples were recombined and the volume was adjusted to 300 μl with TLE. Fragments between 100 and 400 bp in size were selected using Agencourt AMPure XP beads (Beckman Coulter), according to the manufacturer's instructions. The DNA fragments were eluted off the beads in 55 μl The PCR reactions were pooled and the beads were removed from the supernatant using a magnet. The library was purified by the addition of 1.5 volumes of Agencourt AMPure XP beads (Beckman Coulter), according to the manufacturer's instructions. The sample was eluted off the beads using 25 μl 1× TLE buffer and transferred to a fresh tube and the concentration was determined using Qubit (Qubit dsDNA HS Assay Kit; Thermo Fisher Scientific) and quantitative PCR (KAPA SYBR FAST qPCR Master Mix; KAPA Biosystems), according to the manufacturer's instructions. Library size distributions were determined on a 5% polyacrylamide gel. Paired-end 75-bp sequencing was carried out using the Illumina NextSeq 500 system with mid-or high-output NextSeq 500/550 kits (version 2.5) according to the manufacturer's instructions.
Mapping of Hi-C reads and generation of interaction matrices.
Reads were mapped to a modified version of the TbruceiLister427_2018 genome assembly (downloaded from TriTrypDB; release 43) containing the following modifications. For all Hi-C experiments, we masked a newly discovered misassembly in bloodstream expression site 2 to avoid incorrect interactions. For Hi-C experiments in 2T1 control 35 and VEX2 knockdown cells, we added the transfected constructs as separate contigs to the genome. The construct sequences, as well as the modified genome, have been deposited together with the results of the analyses under https://doi.org/10.5281/zenodo.3628212. Mapping, filtering, normalization and read counting were performed by the mHi-C pipeline, as described in ref. 19 . We modified the pipeline to be compatible with the T. brucei genome assembly and also incorporated a merging step for the individual replicates after the removal of duplicate reads, but before data normalization (step 4), in order to avoid the introduction of any bias by the merge. We chose iterative correction 56 as the normalization method and finally filtered the mHi-C outcome by the posterior probability of 0.6 (that is, reads were assigned to a bin with a likelihood of at least 60%). Downstream analyses, such as normalizing for the different ploidy within the T. brucei genome assembly, were implemented with in-house scripts. The digestion of the reference genome with the restriction site was implemented using HiC-Pro Utilities 57 . All of the scripts necessary to reproduce the Hi-C analyses can be found at https://github.com/bgbrink/PRJEB35632.
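The posterior-probability filter applied to the mHi-C output can be illustrated with a small sketch; the record layout used here is a simplified stand-in and does not reproduce the actual mHi-C file format:

```python
# Keep only interactions assigned with a posterior probability of at least 0.6,
# mirroring the filtering threshold described above. Each record here is a
# simplified stand-in: (bin_i, bin_j, count, posterior).

def filter_by_posterior(interactions, threshold=0.6):
    return [rec for rec in interactions if rec[3] >= threshold]

interactions = [
    ("chr1_bin10", "chr1_bin42", 8, 0.91),
    ("chr1_bin10", "chr9_bin03", 2, 0.41),   # dropped: ambiguous assignment
    ("chr5_bin07", "chr5_bin08", 15, 0.77),
]
print(filter_by_posterior(interactions))
```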
Virtual 4C analysis.
To visualize interactions between one genomic region (viewpoint) and all other genomic sites, relevant bins were extracted from a 20- or 50-kb Hi-C matrix. An average interaction value for every genomic bin was calculated if the viewpoint regions spanned more than one bin. The coordinates that define the different viewpoints used in this study are shown in Supplementary Data 1 (sheet 2). To determine the relative interaction frequency of a viewpoint with chromosome cores and subtelomeres, the average interaction frequency of the viewpoint with each chromosome core and subtelomere was calculated based on the relative interaction frequencies extracted by virtual 4C analysis. The ratio between the average interaction frequency (core) and the average interaction frequency (subtelomeres) was calculated for each chromosome and plotted as one dot. The virtual 4C analysis was implemented using HiC Sunt Dracones (https://doi.org/10.5281/zenodo.3570496). All of the scripts necessary to reproduce the Hi-C analyses can be found at https://github.com/bgbrink/PRJEB35632.
Statistical analysis. All statistical analysis was performed using GraphPad Prism software (version 7.0). Detailed summaries of the n and P values for all of the analyses performed in this study are provided as source data.
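Returning to the virtual 4C procedure described above, the following sketch illustrates the two operations involved: averaging the viewpoint rows of a binned interaction matrix, and forming the core/subtelomere interaction ratio for one chromosome. It is a simplified stand-in for HiC Sunt Dracones, with invented inputs:

```python
import numpy as np

def virtual_4c(matrix, viewpoint_bins):
    # Average the interaction profile over all bins spanned by the viewpoint.
    return matrix[viewpoint_bins, :].mean(axis=0)

def core_vs_subtelomere_ratio(profile, core_bins, subtelomere_bins):
    # Ratio of the mean interaction frequency with the chromosome core
    # to the mean interaction frequency with its subtelomeres.
    return profile[core_bins].mean() / profile[subtelomere_bins].mean()

# Toy 10-bin "chromosome"; the viewpoint spans bins 2-3.
rng = np.random.default_rng(0)
toy = rng.random((10, 10))
toy = (toy + toy.T) / 2.0

profile = virtual_4c(toy, viewpoint_bins=[2, 3])
ratio = core_vs_subtelomere_ratio(profile, core_bins=[3, 4, 5, 6], subtelomere_bins=[0, 1, 8, 9])
print(ratio)   # one such dot per chromosome in the analysis above
```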
Reporting Summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
High-throughput sequencing data (Hi-C and RNA-seq) generated for this study have been deposited at GitHub (https://github.com/bgbrink/PRJEB35632) and in the European Nucleotide Archive under primary accession number PRJEB35632, respectively. Previously published ChIP-seq and RNA-seq data that were used for this study are publicly available at the European Nucleotide Archive under accession numbers PRJEB25352 and PRJEB21615, respectively. Processed data and results are available from https://doi.org/10.5281/zenodo.3628212. Source data are provided with this paper.
Extended Data Fig. 2 | Dynamic association between the active VSG gene and the spliced leader rNA (SL) array transcription compartments. a, Immunofluorescence-based colocalization studies of tSNAP myc (SL-RNA transcription compartment) and a nucleolar and active VSG transcription compartment marker (Pol I, largest subunit) using super resolution microscopy -a plot showing signal intensity across a defined area (cyan line) is presented. b-c, The violin plot shows 'outside edge' (b) or 'inside edge' distance (c) measurements between the SL-RNA and VSG transcription compartments in G1, S phase or G2 nuclei -n values correspond to number of nuclei except in G2 where the n value corresponds to the number of expression-site body (ESB) / tSNAP pairs detected (in a total of 68 and 61 nuclei). d, Immunofluorescence-based colocalization studies of tSNAP myc and a nucleolar compartment marker (NOG1) using super resolution microscopy. e, The plot shows signal intensity measurements of the SL-RNA compartments, adjacent or non-adjacent to the expression-site-body (ESB). f, The plot shows 'inside edge' distance measurements between the SL-RNA compartments, adjacent or non-adjacent to the ESB and the nucleolus. ESB: Pol I extranucleolar reservoir and VSG transcription compartment. a, d, DNA was counter-stained with DAPI; the images correspond to maximal 3D projections of stacks of 0.1 μm slices and are representative of two biological replicates and three independent experiments; scale bars 2 μm. Violin plots (b, c, e, f): white circles show the medians; box limits indicate the 25th and 75th percentiles; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles; polygons represent density estimates of data and extend to extreme values. P values were determined using a two-tailed unpaired (b-c) or paired (e-f) Student's t-test. Detailed n and p values are provided in Source Data Extended Data Fig. 2.
Extended Data Fig. 3 | changes in DNA-DNA interactions following a change in VSG isoform expression. a, Virtual 4C analyses between VSG-2 in expression site 1 as viewpoint and the centromere on the subtelomere of chromosome 11. Relative interaction frequencies between the viewpoint and chr. 11 are plotted. Bin size 20 kb. * marks the centromere on chr. 11 (located on subtelomere 3 A). The coordinates of all viewpoints used for virtual 4C analyses are listed in Supplementary Data 1 (sheet 2). b, hi-C (virtual 4C) analyses between VSG-2, VSG-13 or a subtelomeric control region as viewpoint and chromosomal core or subtelomeric regions ('VSG-X' refers to VSG-2, VSG13 or a subtelomeric control region). Each dot represents the ratio of: the average interaction frequency of the viewpoint with the chromosome core / the average interaction frequency with the subtelomeres. One dot per chromosome is plotted. The black bar marks the median ratio per viewpoint. Bin size 50 kb. c, Virtual 4C analyses between the EP1 gene array (chr. 10) as viewpoint and the SL-RNA locus (chr. 9). Relative interaction frequencies between the EP1 array and the SL-RNA locus are plotted. Bin size 20 kb. Fig. 4 | The VeX complex associates with both the active VSG gene and the Spliced Leader (SL) locus in a cell cycle and developmental stage-dependent manner. a-b, Immunofluorescence-based colocalization studies of VEX1 myc / Pol I and GFP VEX2 / tSNAP myc in bloodstream form cells. tSNAP and Pol I were used as markers for the SL-RNA and VSG transcription compartments, respectively. The stacked bar graphs depict proportions of nuclei with overlapping, adjacent or separate signals (these categories were defined by thresholded Pearson's correlation coefficient -see methods); values are averages of two independent experiments (≥100 nuclei for G1 and S phase cells); error bars, SD. The violin plot (a) shows signal intensity measurements of the VEX1 foci, adjacent or non-adjacent to the expression-site-body (ESB / VSG transcription compartment / Pol I extranucleolar reservoir) -in cells with 2 VEX1 foci and 1 ESB. White circles show the medians; box limits indicate the 25th and 75th percentiles; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles; polygons represent density estimates of data and extend to extreme values. P values were determined using a paired Student's t-test; detailed n and p values are provided in Source Data Extended Data Fig. 4. c, VEX1 myc chromatin immunoprecipitation followed by next generation sequencing (ChIP-seq) analysis. The circle plot represents log 2 fold change of ChIP versus Input of non-overlapping 1 kb bins of the 11 megabase chromosomes; outside track shows tandem arrays (red) and the SL-RNA locus (black). An inset zooming on the SL-RNA locus is depicted: heat-map of SL-gene loci. Bin size 300 bp. d, Localization of tSNAP GFP and colocalization studies of VEX1 myc or myc VEX2 and Pol I and VEX1 myc and tSNAP GFP in procyclic forms (insect-stage), using immunofluorescence. Procyclic forms do not express VSGs whereas procyclins are the major surface glycoprotein. Images in a-b and d were obtained using super resolution microscopy and correspond to maximal 3D projections of stacks of 0.1 μm slices; DNA was counter-stained with DAPI; scale bars 2 μm; images are representative of independent experiments using two different biological replicates.
Extended Data Fig. 5 | VeX1 and VeX2, but not tSNAP, delocalize following transcription or splicing inhibition. a, Western-blot analysis of VEX1 myc , myc VEX2 and tSNAP myc before and after sinefungin treatment (5 μg ml -1 for 30 min at 37 °C), which blocks trans-splicing in trypanosomes. The values in red, green and magenta correspond to the fold change in VEX1, VEX2 and tSNAP abundance, respectively, between non-treated and treated samples (normalized against EF1α, loading control) in four independent experiments (not-statistically significant). b, Immunofluorescence analysis of Pol I, VEX1 myc , myc VEX2 and tSNAP myc localization following actinomycin D (ActD, Pol I + Pol II inhibitor, 10 μg ml -1 for 30 min at 37 °C), BMh-21 (Pol I inhibitor, 1 μM for 30 min at 37 °C) or sinefungin treatment. c, tSNAP myc before and after ActD and BMh-21 treatment. Cells displaying no detectable signal (<10%) were excluded. Values are averages of two independent experiments (≥200 nuclei each); error bars, SD. Images in b-c were obtained using super resolution microscopy, correspond to maximal 3D projections of stacks of 0.1 μm slices and are representative of multiple biological replicates and independent experiments; DNA was counter-stained with DAPI; scale bars 2 μm. Uncropped blots (a) and detailed n and p values (c) are provided as Source Data Extended Data Fig. 5.
Extended Data Fig. 6 | Pol i and tSNAP expression and localization following knockdown of the VeX complex. a, Immunofluorescence-based analysis of VSG expression following tetracycline (Tet) inducible VEX1 knockdown, VEX2 knockdown or VEX1/VEX2 knockdown. In unperturbed cells (parental strain), VSG-2 (magenta) is the active VSG and VSG-6 (green) is a silent VSG used to monitor derepression. The stacked bar graph depicts percentages of VSG-2 single positive cells and VSG-2/VSG-6 double positive cells; values are averages of two independent experiments and two biological replicates. DNA was counter-stained with DAPI; scale bar 2 μm. b, Western-blot analysis of VEX2, Pol I, tSNAP myc , VSG-6 and VSG-2 expression following VEX1, VEX2 or VEX1/VEX2 knockdown. EF1α was used as a loading control. The data is representative of two independent experiments and two biological replicates. c-d, Immunofluorescence-based colocalization studies of tSNAP myc (SL-RNA transcription compartment) and Pol I (nucleolus and extranucleolar reservoir). The stacked bar graph in c depicts proportions of G1 nuclei with tSNAP myc / Pol I overlapping, adjacent or separate signals (these categories were defined by thresholded Pearson's correlation coefficient -see methods) following tetracycline (Tet) inducible VEX1 (48 h), VEX2 (12 h) or VEX1/VEX2 knockdown (12 h). tSNAP myc / extranucleolar Pol I localization were not monitored beyond 12 h following VEX2 and VEX1/2 knockdown as Pol I signal drops below detection at later time-points. The values are averages of two independent experiments and two biological replicates (≥100 G1 nuclei). In the violin plot in d, the 'outside edge' distance between the Pol I extranucleolar focus and tSNAP foci was measured in > 81 G1 nuclei. White circles show the medians; box limits indicate the 25th and 75th percentiles; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles; polygons represent density estimates of data and extend to extreme values. In a/c, error bars, SD. In c-d, knockdown conditions were compared to parental cells using two-tailed paired (c) or unpaired (d) Student's t-tests; in c, statistical significance is highlighted when applicable: **, p < 0.01; ***, p < 0.001. Uncropped blots (b) and detailed n and p values (c/d) are provided in Source Data Extended Data Fig. 6.
Extended Data Fig. 7 | The exclusive association between the active VSG and the SL-locus is VEX2-dependent. a-b, DNA fluorescence in situ hybridization (FISH) and super resolution microscopy based colocalization studies of the SL-RNA transcription compartments (probe: digoxigenin-labeled SL repeats) and VSG expression sites (probe: biotin-labeled 50 bp repeats) following VEX2 knockdown. a, The box and violin plots depict the average number and the size of the 50 bp repeats foci, respectively, before and after VEX2 knockdown. b, The violin plot represents the distance between both SL-arrays (before and after VEX2 knockdown). c-d, DNA FISH combined with immunofluorescence colocalization studies of the SL-RNA transcription compartments and VSG expression sites before and after VEX2 knockdown using super resolution microscopy. tSNAP myc (protein marker) and a DNA probe for the SL repeats (biotin-labeled) or DNA probes for the 50 bp repeats (VSG ESs, biotin-labeled) were used (c and d, respectively). The violin plot (d) represents the distance between both tSNAP foci before and after VEX2 knockdown. For all violin plots: white circles show the medians; box limits indicate the 25th and 75th percentiles; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles; polygons represent density estimates of data and extend to extreme values. For the box plot in a, centerlines show the medians; box limits indicate the 25th and 75th percentiles; whiskers extend between the minimum and maximum values. Two-tailed paired (a, left hand side) or unpaired Student's t-tests (a, right hand side; b, d) were applied for statistical analysis.
| 2021-01-13T06:17:21.499Z | 2021-01-11T00:00:00.000 | {
"year": 2021,
"sha1": "492050f26caf02094505549adb16d453a80baca0",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7610597",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "781ed2b7ba746199427af9f1f418125e7c9d751f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11812088 | pes2o/s2orc | v3-fos-license | BFH-OST, a new predictive screening tool for identifying osteoporosis in postmenopausal Han Chinese women
Purpose To develop a simple new clinical screening tool to identify primary osteoporosis by dual-energy X-ray absorptiometry (DXA) in postmenopausal women and to compare its validity with the Osteoporosis Self-Assessment Tool for Asians (OSTA) in a Han Chinese population. Methods A cross-sectional study was conducted, enrolling 1,721 community-dwelling postmenopausal Han Chinese women. All the subjects completed a structured questionnaire and had their bone mineral density measured using DXA. Using logistic regression analysis, we assessed the ability of numerous potential risk factors examined in the questionnaire to identify women with osteoporosis. Based on this analysis, we build a new predictive model, the Beijing Friendship Hospital Osteoporosis Self-Assessment Tool (BFH-OST). Receiver operating characteristic curves were generated to compare the validity of the new model and OSTA in identifying postmenopausal women at increased risk of primary osteoporosis as defined according to the World Health Organization criteria. Results At screening, it was found that of the 1,721 subjects with DXA, 22.66% had osteoporosis and a further 47.36% had osteopenia. Of the items screened in the questionnaire, it was found that age, weight, height, body mass index, personal history of fracture after the age of 45 years, history of fragility fracture in either parent, current smoking, and consumption of three of more alcoholic drinks per day were all predictive of osteoporosis. However, age at menarche and menopause, years since menopause, and number of pregnancies and live births were irrelevant in this study. The logistic regression analysis and item reduction yielded a final tool (BFH-OST) based on age, body weight, height, and history of fracture after the age of 45 years. The BFH-OST index (cutoff =9.1), which performed better than OSTA, had a sensitivity of 73.6% and a specificity of 72.7% for identifying osteoporosis, with an area under the receiver operating characteristic curve of 0.797. Conclusion BFH-OST may be a powerful and cost-effective new clinical risk assessment tool for prescreening postmenopausal women at increased risk for osteoporosis by DXA, especially for Han Chinese women.
Introduction
Effective and early therapy for osteoporosis can reduce the risk of primary fragility fractures by approximately half. [1][2][3] However, once a patient develops osteoporosis, it is nearly impossible to completely restore bone strength, since the loss of bone microarchitecture mass becomes irreversible. 4 Currently, the diagnosis of primary osteoporosis without fragility fracture is based on bone mineral density (BMD) measurement by dual-energy X-ray absorptiometry (DXA). Osteoporosis is defined by the World Health Organization (WHO) criteria as having BMD values at any site 2.5 standard deviations or more below normal values for healthy young individuals. 5 According to the National Osteoporosis Foundation (NOF) guidelines, patients should accept pharmacologic treatment when they have T-scores ≤ -2.5 in the femoral neck, hip, or lumbar spine as measured by DXA. 6 BMD values can be measured conveniently and noninvasively by DXA, though not all physicians have access to this equipment. Due to the high price of DXA equipment, this service is not widely available in most developing countries, including the People's Republic of China, where only major hospitals have the equipment. DXA examinations are also time consuming. The cost of DXA and lack of instruments may limit its widespread use in some communities; hence, complementary approaches are required in developing prescreening tools to better identify patients at risk for primary osteoporosis and to help decide whether patients need a further DXA examination.
Therefore, it is essential to find a better osteoporosis detection methodology for use in the People's Republic of China. Multiple organizations have developed evidencebased osteoporosis screening recommendations, such as the Osteoporosis Self-Assessment Tool for Asians (OSTA), fracture risk assessment tool and weight-based criterion, 7 but the rationale used to create these recommendations is based largely on indirect evidence. Furthermore, these recommendations do not reflect the variation in fracture probability for the Chinese population and therefore must not be viewed as the "gold standard" but rather as a tool to enhance patient assessment. 8 Among these tools, OSTA is a free and effective method for identifying subjects at increased risk of osteoporosis, and the population used to develop this screening test included Chinese women. Its use could facilitate appropriate and more cost-effective use of bone densitometry in developing countries. 9 OSTA has performed well and has been found to be cost-effective for Asians in many studies. [10][11][12] However, some studies reported poor results when validating OSTA's effectiveness in identifying postmenopausal osteoporosis in a Chinese cohort. 13,14 Thus, we were motivated to develop a new screening tool for postmenopausal Chinese women to assess the risk of DXA-determined primary osteoporosis and chose OSTA as a comparison.
This study aimed to develop a new prediction model for DXA-determined primary osteoporosis, and to compare its performance with that of OSTA in identifying patients at increased risk of primary osteoporosis by DXA in a population of healthy Chinese women.
Study population
We conducted a cross-sectional study, recruiting consecutive subjects from communities in downtown Beijing. The study population included healthy postmenopausal Chinese women who came for health examinations at the osteoporosis clinic in Beijing Friendship Hospital from March 2011 to September 2014 without interruption. The inclusion and exclusion criteria are presented in Table 1. Subjects with painful fragility fracture and abnormal biochemistry, including tests for renal and liver function, as well as serum levels of phosphate, total alkaline phosphatase, calcium, and thyroid-stimulating hormone, were also excluded. The subjects had never been diagnosed with primary or secondary osteoporosis, had never been treated for osteoporosis, and were without any recent painful bone symptoms. All the subjects provided written informed consent. The study was approved by the Ethics Committee of Beijing Friendship Hospital, Capital Medical University.
BMD measurements and data obtained via questionnaire
All the women came to the osteoporosis clinic in Beijing Friendship Hospital for DXA BMD measurements of the hip and spine, and were required to fill in a questionnaire, aided by a trained interviewer. The subjects provided information regarding demographic variables and clinical risk factors for osteoporosis using a structured table, and potential risk factors used in the questionnaire were identified from previous publications. These factors included age, weight, height, body mass index (BMI), personal history of fracture after the age of 45 years, history of fragility fracture in either parent, current smoking, consumption of three or more alcoholic drinks per day, age at menarche and menopause, years since menopause, and number of pregnancies and live births. 9,15,16 History of fracture means any fracture after the age of 45 years with or without low-energy trauma history. Height was measured using a stadiometer, and weight was measured using an electronic balance scale (accuracy, 0.1 kg) without shoes. The left femoral neck and the lumbar spine (L1-L4) BMDs were measured using the Hologic Discovery QDR Wi densitometer (Hologic Inc., Bedford, MA, USA). The in vivo short-term reproducibility values were all <1% for all measurements of the lumbar spine, femoral neck, and total hip BMDs. The mean values from young Chinese women were used to calculate the T-scores: L1-L4 BMD 0.967±0.11 g/cm 2 , femoral neck 0.803±0.10 g/cm 2 , and total hip BMD 0.864±0.11 g/cm 2 . All DXA measurements were performed by an experienced technician.
According to the WHO and NOF diagnostic classifications, osteoporosis is defined arbitrarily to be present when any T-score (lumbar spine, femoral neck, or total hip) is ≥2.5 standard deviations below the average for young adults. 5,6
OSTA score
OSTA was calculated based on age and body weight, using the following formula: [Body weight (kg) - age (years)] × 0.2. 9
The decimal digits were then disregarded, as described in the original report. 9 For example, a 60-year-old woman whose body weight was 51 kg would have an OSTA index of: (51-60) ×0.2=-1.8. The decimal digit (0.8) was then disregarded, and the OSTA index was equal to the integer -1.
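For illustration, the OSTA formula and the truncation step can be written as a short function (the function name is ours):

```python
import math

def osta_index(weight_kg, age_years):
    """OSTA = [body weight (kg) - age (years)] x 0.2, decimals disregarded (truncated toward zero)."""
    raw = (weight_kg - age_years) * 0.2
    return math.trunc(raw)

print(osta_index(51, 60))   # raw value -1.8 -> index -1, as in the example above
```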
Statistical analysis
Each risk factor was evaluated as a predictor in univariate analysis. Statistically significant variables were included in the multivariate models. All statistical tests were twosided. The statistical model was constructed by using logistic regression analysis, using SPSS 19.0 (SPSS Inc., Chicago, IL, USA). The regression coefficients for age and body weight were stratified by increments of 10 years and 10 kg, respectively, because the BMDs differed significantly between strata. Smoking was answered "yes" or "no" based on whether the subject currently smoked. Alcohol consumption was scored "yes" if the subject consumed three or more drinks of alcohol daily. A drink of alcohol varies slightly in different countries from 8 to 10 g of alcohol. This is equivalent to a standard glass of beer (285 mL), a single measure of spirits (30 mL), a medium-sized glass of wine (120 mL), or 1 measure of an aperitif (60 mL). Statistical weights used in calculating the index were based on the regression coefficient for body mass (per 10 kg). Values were then multiplied by two and rounded off to yield integers. To calculate the index for each person, the statistical weight for each variable was multiplied by the patient's response (no =0, yes =1) and added to the total.
Construction of the final model
The final multivariate regression model included the following variables: age, weight, height, BMI, parent history of fractured hip, current smoking, consumption of three or more drinks/day of alcohol, and history of fracture after the age of 45 years. The regression coefficient and standard error for each variable are shown in Table 2, along with the index weights, which were calculated as described above. For age and weight, the coefficients correspond to increases of 10 years and 10 kg, respectively. Age (per 10 years) was multiplied by the index weight of -5, and the number was truncated to one digit by dropping the last digit before adding to the index. For example, if the person was 70 years old, [70/10×(-5)]=-35 would be added to the index. The same process was applied to body weight and height.
Excluding parental history of a fractured hip, current smoking, BMI, and alcohol consumption ≥3 drinks/day yielded a model containing only age, weight, height, and history of previous fracture. Women with higher index values tended to have higher BMD at both the hip and spine. The best cutoff index value was 9.1, which gave both high sensitivity and specificity. This value was chosen by optimizing sensitivity and specificity together in a single curve, as shown in Figure 1.
The new model has been named the Beijing Friendship Hospital Osteoporosis Self-Assessment Tool (BFH-OST).
Study population
A total of 2,602 potentially eligible postmenopausal healthy women in Beijing were considered for participation in this study. Patients with a history of having taken antiresorptive medications or glucocorticoids, evidence of rheumatoid arthritis, a history or evidence of metabolic bone disease, or a painful hip fracture were excluded. More detailed exclusion criteria are shown in Table 1. A total of 881 subjects were excluded, leaving a total of 1,721 individuals eligible for the analysis. All the eligible participants were postmenopausal for more than 12 months, had resided in Beijing for more than 20 years, and had the ability to read and provide informed consent. Participant characteristics are shown in Table 3. The prevalence of osteopenia (47.36%) and osteoporosis (22.66%) were both high in the studied population. For example, a 60-year-old woman whose body weight was 45 kg and height was 160 cm with a previous fracture would have a BFH-OST index of: (45-60) ×0.5+0.1×160-1=7.5.
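Putting the worked examples together, the BFH-OST index appears to weight body weight and age at ±0.5 per kg/year, height at +0.1 per cm, and a previous fracture at −1. The sketch below encodes this inferred formula together with the 9.1 cut-off; it is an illustrative reconstruction only and should be checked against Table 2 of the original report before any use:

```python
# Inferred from the worked examples above, e.g. (45 - 60) * 0.5 + 0.1 * 160 - 1 = 7.5.
# This is an illustrative reconstruction, not a validated clinical formula.

def bfh_ost_index(weight_kg, age_years, height_cm, fracture_after_45):
    return 0.5 * (weight_kg - age_years) + 0.1 * height_cm - (1 if fracture_after_45 else 0)

def refer_for_dxa(index, cutoff=9.1):
    # Direction assumed from the text above: higher index values correspond to higher BMD,
    # so values below the cut-off flag women at increased risk who warrant DXA referral.
    return index < cutoff

idx = bfh_ost_index(weight_kg=45, age_years=60, height_cm=160, fracture_after_45=True)
print(idx, refer_for_dxa(idx))   # 7.5 True
```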
Discussion
A simple and accurate tool to identify the risk of osteoporosis is very important in developing countries such as the People's Republic of China. In this study population, the prevalence of osteopenia and osteoporosis was very high (osteopenia: 47.36%, osteoporosis: 22.66%). This high prevalence highlights the need for reliable screening tools to identify women at risk for fractures.
By evaluating many possible risk factors, we developed a clinical risk assessment tool for identifying DXA-determined osteoporosis in a Han Chinese population. The final index, based on age, weight, height, and fracture after the age of 45 years, compared favorably with OSTA. Based on age and body weight alone, OSTA has been found in previous studies to be a good and simple tool with high sensitivity and acceptable specificity for identifying women at risk for osteoporosis. [19][20][21][22] However, another study reported poor results when attempting to validate the use of OSTA for identifying postmenopausal osteoporosis in a Chinese cohort as diagnosed with lumbar spine DXA BMD measurements. 13 In our study, OSTA performed well to identify BMD loss with an AUC of 0.782. However, BFH-OST was modestly superior to OSTA, with an AUC of 0.797 (95% CI: 0.777-0.851), and the difference was statistically significant (P<0.05). Furthermore, for the 49-59 year-old group, the performance was better than for the 60-89 year-old group or for the total population. BFH-OST has higher sensitivity and similar specificity to OSTA, which means that in contrast to OSTA, it may be more useful in screening for osteoporosis, because osteoporosis is asymptomatic in most patients. The high sensitivity of BFH-OST may allow it to become a simple tool in screening for osteoporosis, reducing missed diagnoses. It may help us to screen for patients with a high risk of osteoporosis who need a further DXA examination. The optimal value for BFH-OST of 9.1 was defined in this data set by optimizing the sensitivity and specificity, as shown in Figure 1B. However, it will be important to validate this cutoff in additional data sets. It will also be important to see if the BFH-OST screening tool accurately identifies women with osteoporosis in populations other than Han Chinese women.
Previous investigations indicated that advanced age and low body weight are strongly associated with low BMD and with increased fracture risk. 23 In our study, age and body weight also displayed a strong correlation with osteoporosis. However, height also correlated with osteoporosis, though it usually has been considered an irrelevant factor. 9,16,24 In fact, both a patient's weight and height should be considered when assessing the patient's degree of obesity, which is traditionally thought to be beneficial to bone and a protective factor against osteoporosis. 25 BMI uses the combination of weight and height, which has been shown to be associated with BMD. 26 In a cross-sectional study of 60 women between 10 and 19 years of age, the percent of body fat was linked to suboptimal attainment of peak bone mass. 27 We also found that BMI was associated with BMD, but it was not included in BFH-OST during logistic regression analysis.
Increased adiposity may also be linked to elevated risk of fractures. In a case-control study of 100 patients with fractures and 100 age-matched, fracture-free control subjects aged 3-19 years, high adiposity was associated with increased risk of distal forearm fractures. 28 In that study, BMI of the study population was over 30 kg/m 2 . In our study, the average BMI was 24 kg/m 2 , and the distribution of BMI values was normal. Despite this fact, body weight remained a protective factor. In a study by Lloyd et al, every unit increase in BMI was associated with an increase of 0.0082 g/cm 2 in BMD (P,0.001), and this relationship did not differ by age, sex, or race. 29 Therefore, height and weight should both be examined to assess the risk of osteoporosis because they are both relevant for evaluating the nutrient status of patients, which affects the risk of osteoporosis.
We added previous, self-described fracture history after the age of 45 years to the model as a risk factor, since it has a significant relationship with osteoporosis. Patients could not tell whether it was a fragility fracture. Previous fracture was also considered as a risk factor for osteoporosis in previous studies. 9,16,24 We believe that a history of fracture after the age of 45 years may reflect bone strength.
This study is community-based and cross-sectional but not retrospective, which distinguishes it from previous studies. The statistics were obtained simultaneously with the BMD measurement. Furthermore, the inclusion and exclusion criteria very strictly excluded the effects of secondary osteoporosis, nationality, and any antiresorptives or anabolic medications. All the subjects were long-term residents of Beijing, and the subjects were enrolled consecutively. Furthermore, this analysis proposed a new method for identifying osteoporosis based on the WHO and NOF diagnostic classification (T-score ≤ -2.5 at the femoral neck, total hip, or lumbar spine) for patients needing treatment.
Limitations
This study also has some limitations. All the subjects were recruited from the community health clinic population, but patients were mostly located nearby the study hospital. So the study population may not fully represent the actual female population in Beijing. A larger sample of the community is necessary in future studies. The population structure of this study also differed from the actual demographic mix in Beijing, which could affect the results. Our results should be confirmed in other cohorts.
Conclusion
In conclusion, our study developed a new osteoporosis selfassessment tool (BFH-OST), which may be a simple and costeffective prescreening tool for identifying postmenopausal women at increased risk for osteoporosis. | 2018-04-03T00:29:03.245Z | 2016-08-04T00:00:00.000 | {
"year": 2016,
"sha1": "82d2a1265ac423b1690ff9301d8dedee30ac6668",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=31733",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e60ae89492b5eac6e0f04a29a7085a9196d96dde",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
116023590 | pes2o/s2orc | v3-fos-license | Smart Hybrid Micro-Grid Integration for Optimal Power Sharing-Based Water Cycle Optimization Technique
: Micro-Grid (MG) with hybrid power resources can supply electric loads independently. In case of surplus power, the neighborhood micro-grids can be integrated together in order to supply the overloaded micro-grid. The challenge is to select the most suitable, optimal and preferable micro-grid within a distributed network, which consists of islanded MGs, to form that integration. This paper presents an intelligent decision-making criteria based on the Weighted Arithmetic Mean (WAM) of different technical indices, for optimal selection of micro-grids integration in case of overloaded event due to either unusual increase in consumed power or any deficiency in power generation. In addition, overloading is expected due to excess increase or decrease in weather temperature. This may lead to extreme increase of load due to increase of air conditioning or heating loads respectively. The proposed arithmetic mean determination based on six multi-objective indices, which are voltage deviation, frequency deviation, reliability, power loss in transmission lines, electricity price and CO 2 emission is applied. This work is developed through three main scenarios. The first scenario studies the effect of each index on the integrated micro-grid formation. The second scenario is the biased optimization analysis. In this stage, the optimal micro-grids integration is based on intentionally chosen multi-objective index weights to fulfil certain requirements. The third scenario targets the optimal selection of the multi-objective indices’ effectiveness weights for power system optimum redistribution. The sharing weights of each index will be optimally selected by Water Cycle Optimization Technique (WCOT) and Genetic Algorithm (GA) addressing the system optimal power sharing through optimum micro-grids re-formation (integration). WCOT and GA are simulated using MATLAB (R2017a, The MathWorks Ltd, Natick, MA, USA). The developed work is applied to a distributed network which consists of a five micro-grid tested system, with one overloaded micro-grid. The three modules are utilized for multi-objective analysis of different alternative micro-grids. Both WCOT and GA results are compared. In addition, it is investigated to find and validate the optimum solution. Final decision-making for optimal combination is determined, aiming to reach a perfect technical, economic and environmental solution. The results indicate that the optimal decision may be modified after each individual index weight exceeds a specific limit.
Introduction
Micro-grids (MGs) play a very important role in the technical, economical and environmental aspects of power system studies. A micro-grid is a distributed system network that merges (is constructed of) hybrid distributed generation resources, combining renewable energy sources with conventional fossil fuel units, to supply its local loads.
In this paper, the technical, environmental and economic optimum power sharing alternative which can supply the overloaded MG is studied based on the multi-objective indices for the decision-making criteria. The decision-making is based on the sharing weights of the indices individually. The decision-making criteria are developed through two main strategies based on three scenarios. The basic analysis method is utilized in the first strategy, which consists of both the Equally Weighted Indices Scenario (EWIS) and the Intended Targeted Weighted Indices Scenario (ITWIS). The second strategy is based on the Intelligent Optimization Scenario (IOS), which utilizes the Water Cycle Optimization Technique (WCOT) [26] compared to the Genetic Algorithm (GA). The efficiency of the Water Cycle Algorithm (WCA) has been proved in solving complex problems with the optimum solution compared to other optimization techniques such as linear programming (LP), nonlinear programming (NLP) and particle swarm optimization (PSO) [27]. The paper is divided into five main sections. Section 2 illustrates the operation conditional flags and multi-objective indices for decision-making criteria. Section 3 provides an overview of the intelligent optimization scenarios (IOS). Section 4 illustrates the hybrid MG integration simulation and results. Section 5 represents the paper conclusion.
Operation Conditional Flags and Multi-Objective Indices for Decision-Making Criteria
The distributed network shown in Figure 2 is constructed of N islanded micro-grids. Normally, each of them works in a stable way at steady state conditions. Each MG has a hybrid combination of Distributed Generation (DG) that consists of renewable energy resources in addition to the conventional fossil fuel resources. Under sudden abnormal conditions which lead to either power generation deficiency or overloading situation, an optimal decision should be made to select the most efficient, economical and environmental friendly MG integration alternative. The suggested optimal coupled alternative to the ill-MG may consist of only one MG or a set of integrated MGs.
For example, a distributed network which is built of 3 MGs (N = 3), with overloaded MG (MG-1), has three power covering alternatives. The different alternatives are [{MG-2}, {MG-3} and {MG-2 & MG-3}]. Generally, if MG-NO is the overloaded MG in a distributed network which has N MGs, the available alternatives are all non-empty combinations of the remaining MGs, so their number is Na = 2^(N-1) - 1.
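For illustration, the candidate alternatives can be enumerated as all non-empty combinations of the healthy MGs; the short sketch below (hypothetical function name) reproduces the three alternatives of the example:

```python
from itertools import combinations

def candidate_alternatives(all_mgs, overloaded_mg):
    """All non-empty subsets of the remaining MGs; for N MGs this yields 2**(N-1) - 1 alternatives."""
    others = [mg for mg in all_mgs if mg != overloaded_mg]
    alts = []
    for size in range(1, len(others) + 1):
        alts.extend(combinations(others, size))
    return alts

print(candidate_alternatives(["MG-1", "MG-2", "MG-3"], overloaded_mg="MG-1"))
# [('MG-2',), ('MG-3',), ('MG-2', 'MG-3')]
```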
The global central controller is responsible for the optimal decision-making, based on the active generated and consumed power data collected from the local MG controllers. The decision signals are sent to the individual Interconnecting Static Switch (ISS) to be opened or closed according to the power re-distribution indices, after considering the operation conditional flags.
Four operation conditions should be checked for each MG to ensure its validity in supplying the ill-MG either alone or through a combined MG group.
where PDG (MG-i) and Pload (MG-i) are the active generated power and the consumed power of micro-grid i respectively.
If UPC is less than zero, then this MG is overloaded. The Shareable Unused Power Capacity (UPCShareable) is calculated for all the remaining MG(s) as where α is a safety margin for any sudden fault or disaster which may take place during the formation of the distributed network. It is suggested that α = 0.25 to save a generation margin equivalent to 25% of the MG's consumed power in case of any emergency power extension. As UPCShareable represents the UPC after assigning a safety margin, it is reserved to cover any sudden disturbance or overloading condition in the network. The condition for coupling the studied alternative with the ill-MG is that its UPCShareable must overcome the Power Deficiency Load (PDL), otherwise, the studied alternative may be combined with other MG sets. C1 should be checked for each MG as follows: where Na is the number of MG(s) in the same alternative.
where P DG (MG-i) and P load (MG-i) are the active generated power and the consumed power of micro-grid i respectively. If UPC is less than zero, then this MG is overloaded. The Shareable Unused Power Capacity (UPC Shareable ) is calculated for all the remaining MG(s) as where α is a safety margin for any sudden fault or disaster which may take place during the formation of the distributed network. It is suggested that α = 0.25 to save a generation margin equivalent to 25% of the MG's consumed power in case of any emergency power extension. As UPC Shareable represents the UPC after assigning a safety margin, it is reserved to cover any sudden disturbance or overloading condition in the network. The condition for coupling the studied alternative with the ill-MG is that its UPC Shareable must overcome the Power Deficiency Load (PDL), otherwise, the studied alternative may be combined with other MG sets. C 1 should be checked for each MG as follows: where N a is the number of MG(s) in the same alternative.
The second condition flag is the availability of each MG. It illustrates the status of the ISS that indicates the tie to the overloaded MG.
If C 2i flag is zero, MG-i cannot supply the overloaded MG or be shared with any other MG(s).
Voltage Deviation Flag (C 3 )
Voltage deviation (∆V) is one of the main conditions which must be checked, and confirmed to be within a specific limit, before coupling MGs. It is defined as the maximum difference between each bus voltage and the nominal voltage (V nominal ) of the MG. This deviation should be kept within the limit of ±∆V L = ±5% to avoid any failure or damage in the distributed system [28]:
Frequency Deviation Flag (C 4 )
The fourth studied condition is the frequency deviation, which must be confirmed to lie within a certain range. Frequency deviation (∆F) is the maximum difference between the bus frequency and the nominal frequency (F nominal ) of each MG. Deviation in frequency may lead to a disaster, so the maximum acceptable fluctuation is ±∆F L = ±1% [29]. It is expressed in terms of F b , the p.u. frequency of bus b in MG-i, with F nominal = 1 p.u.
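The four conditional flags can be gathered into one screening routine. The sketch below is a simplified reading of the conditions described above; in particular, the shareable capacity is taken here as the generated power minus (1 + α) times the consumed power with α = 0.25, and all function and field names are ours:

```python
ALPHA = 0.25        # safety margin (25% of the MG's consumed power)
DV_LIMIT = 0.05     # +/-5% voltage deviation limit
DF_LIMIT = 0.01     # +/-1% frequency deviation limit

def upc(p_dg, p_load):
    """Unused Power Capacity of one MG."""
    return p_dg - p_load

def upc_shareable(p_dg, p_load, alpha=ALPHA):
    """Shareable capacity after reserving the safety margin (one reading of the text above)."""
    return p_dg - (1.0 + alpha) * p_load

def passes_flags(alternative, pdl):
    """alternative: list of dicts with assumed keys p_dg, p_load, iss_available, dv, df.
    pdl: power deficiency load of the overloaded MG."""
    c1 = sum(upc_shareable(mg["p_dg"], mg["p_load"]) for mg in alternative) >= pdl
    c2 = all(mg["iss_available"] for mg in alternative)
    c3 = all(abs(mg["dv"]) <= DV_LIMIT for mg in alternative)
    c4 = all(abs(mg["df"]) <= DF_LIMIT for mg in alternative)
    return c1 and c2 and c3 and c4

mg2 = {"p_dg": 12.0, "p_load": 8.0, "iss_available": True, "dv": 0.02, "df": 0.004}
mg3 = {"p_dg": 9.0, "p_load": 7.5, "iss_available": True, "dv": 0.04, "df": 0.006}
print(passes_flags([mg2, mg3], pdl=1.5))   # True for these made-up numbers
```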
After inspecting the condition flags, the six evaluating indices should be studied as the main assessment of the decision-making criteria.
Voltage Deviation Index (X 1 )
The voltage deviation index is one of the principal indices in operating any electric power system. It is determined from the bus voltage deviation ∆V defined above.
Frequency Deviation Index (X 2 )
Frequency can be introduced as the backbone of the power quality. Frequency deviation index is calculated as follows:
Reliability (X 3 )
It is an indicator of customers' interruptions and customer time lost for events lasting for more than three minutes as short interruptions are neglected [21]. It is the ability of the system to perform certain tasks under specific environmental conditions for a certain period of time. Any component failure in the electric distribution network causes interruptions to the customer services, like what happened in Ekpoma Network, Edo State, in Nigeria [30] System Average Interruption Frequency Index (SAIFI) is a substantial indicator, which represents the total number of interrupted customers corresponding to the total number of all served customers during a specific period.
SAIFI = Frequency of Outages / Number of customers supplied (12)
Power Loss in Transmission Lines Index (X 4 )
Transmission Lines (T.L.) are the interconnecting lines between MGs, which facilitate the movement of electrical power. They are exposed to many losses, which affect the transmitted energy. Copper loss is one of the main losses occurring in transmission lines; it depends on the length and the impedance of the line between the overloaded MG and the selected MG(s). In the loss expression, P Loss is the T.L. power loss (in kW), Z L is the impedance of the transmission line per unit length, L is the T.L. length and V L is the line-to-line voltage (in V).
Electricity Price Index (X 5 )
One of the main important criteria in selecting the best alternative is the Electricity Price (E.P. in $). As each MG has its own distributed generators, its owner can sell the electricity to the neighboring MG(s) for a different price. The difference in tariff is determined according to the variation of the peak hour and the usage of conventional fossil fuel resources.
CO 2 Emission Index (X 6 )
MGs have their own electrical energy generation resources. Each resource has different substantial effects on the environment. The network operator penalizes the MG owner according to the level of CO 2 emission resulting from the use of conventional fossil fuel resources. Less CO 2 emission means minimization of the penalties, which makes the alternative more desirable [31].
Decision-Making Criteria
The Decision-Making Criteria (DMC) depend on selecting the optimal alternative from a group of alternatives, considering a set of N C indices. Each index has a certain sharing weight, supplementary to the others, such that the weights sum to one (W 1 + W 2 + ... + W N C = 1). The Decision-Making (DM) problem is expressed in matrix form, collecting the value of each index for each alternative.
Weighted Linear Normalization (WLN) is calculated for all the input data. It is used to rescale the values of the indices [32].
Weighted Arithmetic Mean (WAM) is applied to the independent indices for decision-making. It evaluates the distinct importance of each index [32].
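A hedged sketch of this decision step is given below: each index column is rescaled with a simple linear normalization (a stand-in for the exact WLN formula, which is not reproduced in this extract) and the columns are then combined by a weighted arithmetic mean. All six indices are treated here as cost-type quantities (lower is better), so the alternative with the smallest WAM score is preferred:

```python
import numpy as np

def weighted_linear_normalization(dm):
    """Rescale each index column to [0, 1]; a simple stand-in for the WLN of ref. [32]."""
    col_min = dm.min(axis=0)
    col_rng = dm.max(axis=0) - col_min
    col_rng[col_rng == 0] = 1.0          # avoid division by zero for constant columns
    return (dm - col_min) / col_rng

def weighted_arithmetic_mean(norm_dm, weights):
    """WAM score per alternative; the weights are assumed to sum to 1."""
    return norm_dm @ np.asarray(weights)

# Rows: alternatives, columns: the six indices X1..X6 (made-up values, lower = better).
dm = np.array([
    [0.02, 0.004, 1.2, 35.0, 0.11, 40.0],
    [0.03, 0.006, 0.8, 20.0, 0.14, 55.0],
    [0.01, 0.005, 1.5, 50.0, 0.09, 30.0],
])
weights = [1 / 6] * 6                    # the Equally Weighted Indices Scenario
scores = weighted_arithmetic_mean(weighted_linear_normalization(dm), weights)
print(scores, "-> best alternative:", int(np.argmin(scores)))
```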
In this paper, the decision-making criteria are utilized through two main strategies, as illustrated in Figure 3.
The Water Cycle Optimization Technique (WCOT)
The Water Cycle Optimization Technique (WCOT) is inspired by the water cycle process phenomenon and is developed by Hadi Eskander et al. [26]. It is mainly based on the flow of rivers and streams into the sea. Over decades, various algorithms were used to solve optimization problems, which guaranteed obtaining the global optimal solution for the studied system. Recently, researchers have tended to use meta-heuristic algorithms based on natural inspiration. The algorithms combine the rules and randomness of the natural phenomena [33].
The WCA depends on an initial population called raindrops. The best raindrop is assumed to be the sea, then the river, then the streams which flow into the river and the sea [26].
The WCOT procedures are discussed below.
Population Initialization
In this stage, random values for the system variables are assigned, within the problem space, to be the initial raindrop (RD) population (P RD ). Each raindrop is represented as an array of size 1 × N vars , where N vars is the number of variables and N P is the number of raindrops (initial population). An evaluating fitness function (FF) is computed for each raindrop. All rivers and streams end up in the sea, which represents the optimum solution; the remaining raindrops form the streams, which flow either directly into the sea, or first into a river and then into the sea. N SR denotes the number of rivers plus the sea, and NS n is the number of streams which flow into a specific river or into the sea. Streams flow into the rivers or the sea through a distance (d), as illustrated in Figure 4 and Equation (27).
where C is between 1 and 2. The detected best value of C is 2. The distance X varies always between 0 and (C × d).
If the FF given by the stream is better than its connecting river, the positions of the river and the stream should be exchanged. In addition, such exchange may happen between the river and the sea as shown in Figure 5. The new positions of the river and the stream are as follows:
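Since the position-update equations themselves are not shown above, the following sketch reproduces the standard WCA update and the exchange rule; it is assumed to correspond to the missing Equations (28)-(29), and the names are illustrative.

```python
import numpy as np

def move_towards(point, target, c=2.0, rng=np.random.default_rng()):
    """WCA position update: a stream moves a uniformly random fraction of C times
    its distance towards its river (and a river towards the sea). This is the
    standard update of Eskandar et al., assumed equivalent to Equations (28)-(29)."""
    return point + rng.random(point.shape) * c * (target - point)

def exchange_if_better(cost_stream, cost_river, stream, river):
    """Swap the roles of a stream and its river when the stream reaches a better
    (lower-cost) position; the same rule applies between a river and the sea."""
    if cost_stream < cost_river:
        return river.copy(), stream.copy()
    return stream, river
```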
The Evaporation Process
Evaporation is an important factor for preventing rapid convergence. To avoid trapping in local optima, the water of the sea evaporates as rivers and streams flow into the sea. The river flowing into the sea is determined by Equation (30) where i = 1, 2, . . . , N SR -1. D max is a number close to zero. If the distance between the river and the sea is less than d max , it means that the sea and the river can join each other naturally. After sufficient evaporation, the precipitation process commences. On the other side, the search intensity near the sea is reduced, as the distance is greater than d max . In general, d max mainly controls the intensity search near the sea which represents the optimum solution and it can be decreased by:
The Raining Process
This process is applied after the evaporation condition is fulfilled. In this stage, the raindrops form streams at different locations, which flow into the river or directly into the sea. The new location of the formed streams which flow directly into the sea is expressed as follows: where LB and UB are the lower and upper bounds, respectively. The optimum solution of the streams which directly flow into the sea is explored as follows: where √µ represents the standard deviation, as µ is the variance coefficient which depends on the searching region within a range around the sea. A small value of µ indicates that the algorithm searches in a small region. For a suitable region, µ is set to be 0.1.
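A sketch of the evaporation and raining step, assuming the standard WCA formulation for the equations that are not shown above, is given below; all names are illustrative.

```python
import numpy as np

def evaporation_and_raining(sea, rivers, d_max, lb, ub, mu=0.1, rng=np.random.default_rng()):
    """Evaporation/raining step (a sketch following the standard WCA). A river that
    comes closer to the sea than d_max evaporates and rains again: its replacement
    raindrop is drawn uniformly from [LB, UB], while streams that flow directly into
    the sea are re-seeded in a narrow Gaussian region of width sqrt(mu) around it."""
    for i, river in enumerate(rivers):
        if np.linalg.norm(sea - river) < d_max:
            rivers[i] = lb + rng.random(sea.shape) * (ub - lb)     # uniform new raindrop
    stream_near_sea = sea + np.sqrt(mu) * rng.standard_normal(sea.shape)
    return rivers, stream_near_sea

# The search intensity near the sea is controlled by shrinking d_max each iteration, e.g.:
# d_max = d_max - d_max / max_iterations
```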
Convergence Criteria
In WCOT, the optimization process progresses until the convergence criteria (termination condition) are achieved.
The Water Cycle Optimization Technique Algorithm is explained by the flowchart in Figure 6. The WCOT flowchart explains each step from the initial population until the convergence criteria.
Genetic Algorithm
In this paper, a comparison is held between the Water Cycle Optimization Technique (WCOT) and the Genetic Algorithm (GA). The topology of the Genetic Algorithm is based on the biological evolution process of computational data and the mechanism of natural genetics selection [34,35]. GA is composed of three main significant operators, which are reproduction, crossover and mutation. These operators result in an optimum solution using a fitness function, as it maps the natural objective function.
Both WCOT and GA are utilized to produce the optimal weighted solution for each alternative as a step in the decision-making algorithm program. The weighted arithmetic mean is the objective function (fitness function) to find the global optimum for the decision-making.
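In other words, for every alternative the optimizers search over the six sharing weights. A minimal sketch of such a fitness function is shown below; the 0.05 lower bound is an assumption inferred from the "validation border" mentioned later in the text, and the names are illustrative.

```python
import numpy as np

def wam_fitness(weights, normalized_indices, w_min=0.05):
    """Objective evaluated by both WCOT and GA for one alternative: the weighted
    arithmetic mean of its six normalized indices. The decision variables are the
    six sharing weights; they must sum to 1 and (as assumed here) each stay above
    the lower bound w_min."""
    w = np.asarray(weights, dtype=float)
    if np.any(w < w_min) or abs(w.sum() - 1.0) > 1e-6:
        return -np.inf                               # penalize infeasible weight vectors
    return float(np.asarray(normalized_indices) @ w)
```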
The flowchart of the main outlines of decision-making criteria, starting with calculating the UPC actual for all MGs to check if interconnection is required or not, is displayed in Figure 7. If any MG is flagged as an overloaded MG, the operation conditional flag (IOS problem constraints) must be calculated for each other MG to check the validity of interconnection with the overloaded MG. The optimum solution for supplying the overloaded MG is based on the weights of indices. The weights of indices are studied by three scenarios, which are the Equally Weighted Indices Scenario (EWIS), the Intended Targeted Weighted Indices Scenario (ITWIS) and the Intelligent Optimization Scenario (IOS) based on the Water Cycle Optimization Technique (WCOT) and the Genetic Algorithm (GA). A command signal is sent to the relevant interconnecting switch (ISS) to be closed.
Figure 6. Flowchart for the Water Cycle Optimization Technique [26].
Figure 7. Flowchart for the decision-making algorithm.
Hybrid MG Integration Simulation and Results
In the proposed system under study, the distribution network consists of five isolated micro-grids (MGs). At normal operation, each micro-grid supplies its own load with its own distributed resources. The distribution network is fully controlled using a continuous global controller. The global controller checks if any MG(s) has/have any deficiency. The global controller detects deficiency by comparing the obtained measurement data with the data in Table 1. Table 1 represents 6 indices: load power-generated power, reliability factor, SAIFI, CO 2 emission, voltage deviation, and frequency deviation for each MG. In Table 2, the data related to the transmission lines between all MG(s) are presented. Table 2 shows that the five MGs have a close range of transmission line lengths (km) and impedances (Ohm/km), as power is transmitted in a medium voltage range with 66 kV. In addition, Table 1 declares that MG-2 will be flagged as an overloaded MG corresponding to Equation (2), because load (72 kW) is greater than generation (54 kW) in MG-2. MG-2 cannot supply its own load by itself under normal conditions. The supply of the overloaded MG can be done by one of fifteen alternatives, each of which has six indices with six different weights. To aggregate the six indices, which represent different quantities, the indices first have to be normalized. Each alternative has its own topology, so a linear normalization is determined for each alternative. Linear normalization has been done to select the optimum alternative with respect to the other alternatives as in Table 3. Operational Conditional flags (IOS problem constraints) should be studied for all MGs. MG-4 is flagged to show that the shareable unused power capacity does not satisfy its own load after taking into account the safety margin as in Equation (3). MG-4 cannot supply the overloaded MG by itself but it can share a specific power with any neighboring MG to supply the overloaded one.
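A small sketch of the overload detection and of the operational conditional flag is given below; the data structure and the exact safety-margin rule of Equations (2)-(3) are assumptions, and only the MG-2 figures (72 kW load, 54 kW generation) come from Table 1 as quoted above.

```python
def find_overloaded(mgs):
    """Flag every MG whose load exceeds its own generation (cf. Equation (2));
    with the data of Table 1, MG-2 (72 kW load vs. 54 kW generation) is flagged."""
    return [name for name, mg in mgs.items() if mg["load_kw"] > mg["gen_kw"]]

def can_support(mg, deficit_kw, safety_margin=0.1):
    """Operational conditional flag (the exact margin of Equation (3) is an
    assumption): an MG is a valid alternative only if its shareable unused
    capacity covers the deficit with some reserve left."""
    unused_kw = mg["gen_kw"] - mg["load_kw"]
    return unused_kw >= deficit_kw * (1.0 + safety_margin)

# Hypothetical data for illustration; only MG-2's values are taken from the text:
# mgs = {"MG-1": {"load_kw": 40, "gen_kw": 80}, "MG-2": {"load_kw": 72, "gen_kw": 54}}
# find_overloaded(mgs)  # -> ["MG-2"]
```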
Basic Analysis Methods
The basic analysis methods are divided into two scenarios. The first scenario is the Equally Weighted Indices Scenario (EWIS). The second scenario is the Intended Targeted Weighted Indices Scenario (ITWIS). The first scenario assumes that all indices are equally weighted which means that W 1 = W 2 = W 3 = W 4 = W 5 = W 6 = 0.16667. All indices have been studied for each alternative as represented in Figures 8-13, based on the results of Table A1 (Appendix A). Each index has its own optimum solution alternative which differs from one index to another. Decision-making criteria are studied to merge all indices to have the optimum alternative as in Equation (20) and represented in Figure 14. Figure 15 is the zoomed version of Figure 14, (by making the reference 0.1063). The decision algorithm is flagged to show that MG-1 (1st alternative) is the optimum solution for all the indices compared with the other alternatives.
The Intended Targeted Weighted Indices Scenario (ITWIS)
The second scenario is based on changing the weight of only one index and making all the remaining indices equal in weight. If W is the weight of the selected index, then the weight of each of the remaining indices will be (1 − W)/5. It is concluded that if the increase or decrease of the chosen index's weight exceeds a specific limit, the decision on the optimum alternative differs from one index to another, as represented in Figures 16-18. Table A2 (in Appendix A) illustrates the data of Figures 16-18, in which the weights (W) of the frequency deviation, reliability and transmission line power loss indices are changed gradually from 0.05 to 1. The corresponding weighted arithmetic mean (X i ) is calculated to determine the optimal selected alternative for the decision-making step. The variation effect of the weights of the voltage deviation, electricity price and CO 2 emission indices on the optimal decision-making is explained in Table A3 (Appendix A).
It is observed that due to the close distribution between the MGs and the closeness of transmission lines and their impedances, all results tend to MG-1 as shown in Figure 18. As shown in Tables A2 and A3 (Appendix A), variation in the weight of the indices below the validation border 0.05 for each index (limits violation case) may lead to different decision-making, with better results than the WCOT and GA. The results show that some indices are excluded by taking a lower weight corresponding to the other indices.
To check the effect of the intermediate linking between the optimal selected alternative, and transmission lines and their impedances, a change is executed on the transmission line lengths and impedances between MGs. The impedance of transmission line between the overloaded Micro-Grid (MG-2) and (MG-5) is reduced to 0.13 Ohm/km instead of 0.23 Ohm/km as shown in Table 4. The relatively smallest impedance is between MG-5 and MG-2, which affects the transmission power losses indices. The optimal decision-making is modified as shown in Figure 19, as explained in Table A4 (Appendix A). MG-5, which has the smallest distance to the overloaded Micro-Grid (MG-2), is selected as the optimum solution.
The Intelligent Optimization Scenario (IOS) Results
The third scenario is obtained by applying the Water Cycle Optimization Technique (WCOT), and the results are compared with the Genetic Algorithm (GA) as in Table 5. The operation conditional flags are considered to be the constraints of the artificial intelligence algorithms. The WCOT and GA are run for each alternative. It is observed that the optimum solution calculated by the WCOT is better than that of the GA, as shown in Table 5. From these tables, the results reveal the need for an optimization technique to find the optimal solution due to the complexity of the targeted variables and the small applied range. The Water Cycle Optimization Technique (WCOT) shows its superiority over the GA and the heuristic techniques of the first and second scenarios. The only value indicating a higher objective function (better solution) than the WCOT was obtained by violating the lower constraints.
Conclusions
This paper presents an optimal, efficient, reliable, economical and eco-friendly power sharing solution in case of overloaded or insufficient power generation in a hybrid micro-grid, through its integration with other neighboring micro-grids. The optimal selection is built on one of the three studied scenarios, which are based on the weighted arithmetic mean of the six multi-objective indices and the four operation conditional flags. The six indices are voltage deviation, frequency deviation, reliability, power loss in transmission lines, electricity price and CO 2 emissions, respectively. The first scenario module is the basic Equally Weighted Indices Scenario (EWIS), through which the effect of each index on the optimum combination is studied. The second scenario, which is called the Intended Targeted Weighted Indices Scenario (ITWIS), studies the optimal combination based on maximizing the effect of one of the indices over the others through its sharing weight. It progresses by changing the weight of the selected index in steps while keeping all the other indices equally weighted. The third scenario is the Intelligent Optimization Scenario (IOS). It utilizes the Water Cycle Optimization Technique (WCOT) to assign the global optimal MG integration with its six indices' optimum sharing weights. The WCOT selections are compared with the Genetic Algorithm (GA) optimal solutions. The studied modules are applied to a distribution power network, which consists of five hybrid MGs, with one overloaded MG. The results indicate the optimal technical, economical and environment-friendly MGs integration. It is observed that the optimum solution, which satisfies the minimum risk value for each index and indicates the highest fitness function value, is determined by the WCOT. From the obtained results, it is concluded that for all indices, and consequently their weights, the cost function is not sensitive to their variation within a certain limit of the individual index. When this limit is exceeded, the optimal decision may be reconsidered.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
Table A1 explains the value of each weighted index for all alternatives of the equally weighted indices scenario (W = 1/6). The optimum solution for each index is represented by the parts highlighted in grey and it varies from one index to another. The result of the decision-making criteria after merging all indices emphasizes that Alternative-1 (A-1/MG-1) is the optimum solution. The selection is based on MG-1 having the maximum value of the objective (X i ), as represented by the parts highlighted in red.
The effect of the intended targeted weight change of the six indices on the decision-making is presented in Tables A2 and A3. Table A2 illustrates the gradual variation in the weights of the frequency deviation, reliability and power loss in transmission line indices, while the weight variations of the voltage deviation, electricity price and CO 2 emission indices are presented in Table A3. The results validate the optimal solution provided by the Water Cycle Optimization Technique (WCOT). When an index's weight exceeds a certain limit, the optimal decision will change. Table A1. The equally weighted arithmetic mean decision-making matrix for different aggregators.
Table A4 explains the effect of the intended targeted weight change of the power loss in transmission line index on the decision-making, for the modified power network with the upgraded transmission lines between MG-2 and MG-5. The decision-making tends to select MG-5 as an optimal alternative, which has the smallest distance to the overloaded micro-grid (MG-2). | 2019-04-16T13:28:07.313Z | 2018-04-27T00:00:00.000 | {
"year": 2018,
"sha1": "a745189e489e7f1a90a7c96f65e0e4fdef2e6ee2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/11/5/1083/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "56f26389935f447e9869474172f02576bde622b4",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
3799848 | pes2o/s2orc | v3-fos-license | A palaeoparasitological analysis of rodent coprolites from the Cueva Huenul 1 archaeological site in Patagonia (Argentina)
The aim of the present study was to examine the parasite fauna present in rodent coprolites collected from Cueva Huenul 1 (CH1), northern Neuquén (Patagonia, Argentina), an archaeological site that provides stratified sequences of archaeological and palaeontological remains dating from the Late Pleistocene/Early Holocene Transition to the Late Holocene period. Twenty rodent coprolites collected from different sedimentary units from the site, with ages ranging from 13.844 ± 75-1.416 ± 37 years BP, were examined for parasites. Each coprolite was processed as a whole: rehydrated, homogenised, spontaneously sedimented and examined using light microscopy. The coprolites and the eggs of any parasites present were described, measured and photographed. In all, 158 parasite eggs were found in 10 coprolites. The faeces were positive for Viscachataenia quadrata Denegri, Dopchiz, Elissondo & Beveridge and Monoecocestus sp. Beddard (Cestoda: Anoplocephalidae) and for Heteroxynema (Cavioxyura) viscaciae Sutton & Hugot (Nematoda: Oxyuridae). The coprolites examined were tentatively attributed to Lagidium viscacia Molina (Mammalia, Rodentia, Caviomorpha, Chinchillidae). The life cycles of these parasites are discussed.
Recently, new research was initiated at Cueva Huenul 1 (CH1) (36º56'45"S 69º47'32"W), northern Neuquén (Patagonia, Argentina) (Fig. 1), an archaeological cave that provides a stratified sedimentary sequence ranging from the Late Pleistocene to the Late Holocene and in which well-preserved coprolites were found. The excavations provided a 1.4-m sequence composed of two sets of lithostratigraphic units. Basal units VIII-V of the sequence have a high content of organic matter. This material is composed primarily of megafauna dung remains. The radiocarbon dates of these units range between 13.844 ± 75-11.841 ± 56 years BP. In contrast, the second stratigraphic set (units IV-I) has a lower abundance of organic matter, with predominant aeolian sedimentation and is dated between 9.531 ± 39-1.416 ± 37 years BP. The site presents evidence of a very brief, but redundant use of the cave by humans (Barberena et al. 2010) during different stages of the human settlement of northern Patagonia. This paper presents the first palaeoparasitological examination of the coprolites of CH1.
The aim of the present study was to examine the parasite fauna present in rodent coprolites collected from CH1 and to identify the parasitic remains and the host origin of the faeces to assess the parasite life cycles and the importance of rodents in the area studied. In conjunction with other analyses currently in progress, this study will contribute to a palaeoecological reconstruction of Patagonian ecosystems through time.
MATERIALS AND METHODS
Twenty rodent coprolites obtained from different units (II, V, VI and VII) of CH1 were examined for parasites. The coprolites were inventoried and processed individually (Fugassa 2006b). The examination, consisting of the external observation of the faeces, was conducted according to Chame (2003) and Jouy-Avantin (2003). Each coprolite was fully processed by rehydration in a 0.5% aqueous tris-sodium phosphate in a glass tube for one week at 4ºC, then homogenised and processed by spontaneous sedimentation (Lutz 1919, Callen & Cameron 1960) and preserved in 70% ethanol. Ten slides were prepared from each coprolite and one drop of glycerin was added to each slide. The slides were examined with light microscopy. The eggs of the parasites were measured and photographed at 40X magnification.
RESULTS
The coprolites were dark brown and concave to conical, with a smooth surface. One extreme was dull and the other was sharp (Fig. 2). The average measurements of the faeces were 11.70 ± 1.88 mm long by 4.72 ± 0.61 mm wide; the average weight was 0.10 ± 0.03 g.
Ten coprolites contained eggs of three parasites: The eggs of H. viscaciae (Fig. 3) were thick-walled with an operculum at 1 pole and were collected from two coprolites from unit II. The egg measurements (n = 90) ranged from 122.5-147.5 (133.39 ± 4.10) µm long and 55.0-72.5 (62.75 ± 4.48) µm wide. The eggs were oblong and asymmetrical with a convex side and a concave side, with a rounded pole and a more acute pole. An operculum was observed at the sharper pole.
Eggs of V. quadrata (Fig. 4) were collected from units VII, VI, V and II. These eggs were thick-shelled and four-lobed in shape. The oncosphere was not measured and was surrounded by an elongate pyriform apparatus (n = 3) 39.17 µm long by 26.67 µm wide. The size ranges (means) of the 13 eggs that were measured were 75-90 (82.69 ± 5.44) μm long by 75-100 (91.73 ± 8.74) μm wide.
A total of eight Cestoda eggs belonging to family Anoplocephalidae and with characteristics attributable to Monoecocestus sp. were collected from unit II (Fig. 5). The size ranges (means) of the four eggs that were measured were 54.5-60 (59.15 ± 1.3) μm long by 52.5-62.5 (56.7 ± 1.5) μm wide.
DISCUSSION
Based on the characteristics of the eggs of the parasites found in this study and on knowledge regarding the parasite fauna of modern viscachas, the faeces were tentatively attributed to L. viscacia (Caviomorpha: Chinchillidae) (Hugot & Sutton 1989, Denegri et al. 2003).
The family Chinchillidae contains chinchillas, viscachas and their fossil relatives. The family is restricted to southern and western South America. Cabrera (1961) recognised three species of Lagidium, Lagidium peruanum, L. viscacia and Lagidium wolffsohni. The southern viscacha (L. viscacia) is found in Argentina, Bolivia, Chile and Peru. Reig (1986) stated that chinchillids predominate (60%) among the living fauna of Andean localities and that Lagidium and Chinchilla are Andean endemics that became adapted to arid Andean valleys by the Miocene. He also reported that a foundational immigrant stock is known. The first representatives of this stock are well documented for the early Oligocene of Patagonia. These species originally occupied low lands and forested areas and secondarily invaded Andean zones. Viscacha bones were recovered in the archaeological faunal assemblage of CH1 in low frequencies. These animals may have been brought to the cave as food by humans occupying the site, but the bones do not show cuts that would verify this use. Viscachas may also have occupied the cave (their occurrence was recorded during the field work).
The anoplocephaline cestodes (Cyclophyllidea: Anoplocephalidae) represent a diverse group of parasites infecting both terrestrial mammals (placentals and marsupials) and birds. Based on the number of genera present in these hosts, the most important radiation of anoplocephalines has been in rodents and lagomorphs (Beveridge 1994, Wickström et al. 2005). The intermediate hosts of these cestodes are oribatid mites, which are ingested by their herbivorous definitive hosts (Beveridge 1994). Anoplocephalids are parasites of zoonotic importance for animals and humans (Denegri et al. 1998, Taylor et al. 2001).
Two species of anoplocephalid cestodes have previously been described in viscachas belonging to Lagidium Meyen (Rodentia: Chinchillidae) from South America. These anoplocephalid species are Cittotaenia quadrata von Linstow, 1904 and Cittotaenia pectinata Goeze, 1782, parasites of Lagidium peruanum and L. viscacia, respectively. V. quadrata Denegri, Dopchiz, Elissondo & Beveridge, 2003 was subsequently proposed to accommodate C. quadrata (Denegri et al. 2003). Tantaleán et al. (2009) stated that in addition to the specimens of C. quadrata von Linstow 1904, the parasites of L. peruanum must also be recognised as V. quadrata. Denegri et al. (2003) hypothesised that the biogeographical relationships of the genus Viscachataenia suggest that it is derived from Monoecocestus, a genus that primarily parasitises South American rodents, by duplication of the genitalia.
The eggs of V. quadrata, as found in CH1, were previously recorded in living L. viscaciae collected in Argentina and Peru (Denegri et al. 2003, Tantaleán et al. 2009).
Palaeoparasitological occurrences of anoplocephalid cestodes were previously reported for different archaeological sites in Patagonia. Eggs of Monoecocestus sp. were found in rodent coprolites from Alero Mazquiarán (Sardella & Fugassa 2009a) and from Alero Destacamento Guardaparque (Sardella et al. 2010). The eggs of Monoecocestus sp. observed in CH1 resemble those previously found in Holocenic samples from Patagonia. The presence of eggs attributable to V. quadrata and Monoecocestus sp. adds the CH1 cave, located northwest of the locations of previous findings, to the record of anoplocephalids in Patagonia.
The oxyurid nematodes include monoxenic parasites that live in the posterior third of the digestive tract of various vertebrates and arthropods (Anderson 2000). The family Heteroxynematidae includes nematodes that evolved in sciuromorph, caviomorph and myomorph mammals. In addition, this family includes the primitive Heteroxynema sp. suggested by Hall (1916) for Heteroxynema cucullatum, a parasite of the rodent Eutamias amoenus operarius from North America. This nematode genus was subsequently divided into three subgenera, with Cavioxyura spp as parasites of Neotropical Caviomorpha (Petter & Quentin 1976).
In Argentina, Teixeira de Freitas and Lins de Almeida (1936) reported the presence of Heteroxynema werneckii in the intestine of the caviid Galea leucoblephara from the northern area of Jujuy Province. H. (C.) viscaciae was described by Sutton and Hugot (1989) from L. viscaciae collected from Chubut Province. Foster et al. (2002) and Ferreira et al. (2007) confirmed the presence of this parasite in wild viscacha Lagostomus maximus from La Pampa and Chaco Provinces.
Heteroxynema sp. was recently found in rodent coprolites collected from Cerro Casa de Piedra, Santa Cruz Province, Argentina (Sardella & Fugassa 2011). Based on the morphological characteristics and measurements of both the coprolites and the eggs found in CH1, the oxyurids found at the CH1 and Cerro Casa de Piedra archaeological sites, separated by ca. 1.500 km, were attributed to two different species, Heteroxynema sp. and H. viscaciae, respectively. Hugot and Sutton (1989) stated that the species related most closely to H. viscaciae is Heteroxynema (Cavioxyura) chiliensis Quentin, 1975, a parasite of Octodon degus (Molina) from Chile. Perkins et al. (2005) stated that rodents represent one of the most important sources of zoonoses for mammals and that increases in the population densities of rodents resulted in the dispersal of zoonoses and brought them into closer contact with humans. The oxyurid nematodes found in this study are not presently considered zoonotic. However, anoplocephalids can cause human disease if people eat mites found in the soil (Denegri et al. 1998). It is probable that humans living in CH1 were exposed to V. quadrata and/or to Monoecocestus sp. during the time period considered because the earliest human presence in CH1 is considered to have occurred after approximately 10.000 years BP. The anoplocephalids are known to be accidental causative agents of human illness (Denegri et al. 1998, Taylor et al. 2001). Despite the brevity of the human use of the cave (Barberena et al. 2010), hunter-gatherers and animals could have been exposed to parasitic zoonoses and anthroponoses. | 2018-03-09T21:43:56.823Z | 2012-08-01T00:00:00.000 | {
"year": 2012,
"sha1": "51dd318d1a2c4abbedacb54e0cad677e1bbed8c5",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/mioc/a/FHVWmNFRnTm4chQDrXHnmvr/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "51dd318d1a2c4abbedacb54e0cad677e1bbed8c5",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
226338408 | pes2o/s2orc | v3-fos-license | Matrix techniques for Lamb-wave damage imaging in metal plates
The implementation of efficient maintenance strategies of thin-walled structural components require reliable damage detection and localization techniques. In particular, guided ultrasonic waves technology represent an auspicious approach when implemented in a structural health monitoring system. The method is usually based on distributed sensing with piezoelectric elements that act in turn as ultrasound transmitter and receiver. This work aims at a unifying framework for damage localization considering algorithms from different scientific disciplines, e.g. originated from radar and geophysics. Here, we systematically express those algorithms in matrix form and compare the respective damage localization performance with experimental measurements considering an isotropic specimen with a single and also multiple simultaneous defects. In addition, we evaluate the algorithms’ point spread function and propose performance metrics to quantitatively compare the imaging success.
Introduction
In the field of guided-wave Structural-Health-Monitoring (SHM) the localization and classification of structural damage is generally the main goal. Ultrasonic transducers arranged as a spatially distributed array are being used [1][2][3]. Of particular interest for targeted maintenance activities is the damage position extracted inversely from ultrasound signals. Therefore a wide range of model-based algorithms [4][5][6][7], like Delay-and-Sum beamforming (DAS), the synthetic aperture focusing technique, or Migration operators from geophysics can be used. Despite their different scientific heritage and the variety in implementations, those algorithms share the same fundamental principles. Firstly, they perform a virtual back-propagation of the measured wave field to its point of origin in time and space. Secondly, by applying a suitable imaging condition the back-propagated scattered wave field is simplified for the end user in terms of interpretable information like reflectivity or energy spanning the spatial domain. In figure 1 an overview which unifies several approaches is presented.
For the purpose of modelling, a suitable linear wave equation has to be chosen, which includes most observable effects that are present in the data and which is able to account for the geometry of the investigated structure as well as the potentially heterogeneous spatial distribution of elastic parameters and therefore the distribution of velocities. The back-propagation is realised by connecting the fundamental solutions G (Green's function) of the modelled problem with the measured scattered wave field. Precisely, for calculations in the frequency domain the conjugated Green's function G * is used, whereas in time domain simply inverting the data is adequate. This virtual back propagation is possible due to the symmetry of any linear wave equation regarding the time axis. However, using a signal from a single observation point generally will not result in a sufficient reconstruction of the complete former scattered wave field. Instead the data of an adequate amount of observation points has to be used. This whole process is known as Time reversal mirror in the field of Non-Destructive Testing [8] and is the basic principle behind any geophysical migration operator. In theory, there is a discrimination whether the observation points are treated as boundary conditions on the surface or as arbitrarily distributed points inside the observation domain. In this work the latter one is used, whereas a comparison between both approaches for an analogous geophysical setting can be found in [9].
If an analytic version of Green's function is unknown, for example in the case of complex geometries or non-trivial spatial variation of the elastic parameters, the wave propagation can be solved numerically, which is known under the term reverse-time-migration (RTM) [10,11]. For the underlying mathematical principles the chosen numerical method is irrelevant, but in terms of numerical accuracy, effort and parallelization capabilities it has significant influence on the process. [7] applied RTM to a SHM problem, modelling an elastic homogeneous and isotropic medium, whereas [12] generalized it for elastic anisotropic models. Both authors used a wave equation based on the Midlin plate theory, modelling the propagation of flexural waves and therefore using the A 0 Lamb mode for the damage visualization. [13] reformulated the RTM as a least-squares problem for reflectivity mapping in geophysics, while [14] used this in the context of guided-wave SHM.
Instead of a complete numerical solution of the underlying wave equation, Green's function can be approximated by a ray-theoretical problem or two hyperbolic partial differential equations (eikonal-and transport equation) [15]. Nevertheless, the approximation will reduce the propagator implicitly to a certain phase like first or direct arrivals without multiple reflections. This circumstance can be avoided by explicitly modelling certain phases and add them to the propagator [16].
By applying an imaging condition the multi-dimensional result in space-time or space-frequency is either reduced or transformed into an interpretable quantity. Autocorrelation can be used to calculate the energy of the scattered wave field over the whole observation time [17,18], whereas the cross-correlation of different wave types indicate converted energy [17]. Both correlation types are generally implemented without any time lag, assuming the velocity function and therefore Green's function is sufficiently accurate [19]. Taking into account additional lags in time and space can increase the accuracy or can be used for estimating model adaptations [20]. Cross-correlation of the scattered wave field with former incident wave field itself has a special significance, since it connects scattering with the principle of back propagation [21,22]. The imaging result can be interpreted as reflectivity or perturbation in the velocity model and is used nowadays in full waveform inversion routines.
In this work a unifying framework for damage imaging techniques formulated in matrix notation is presented. In particular, methodologies from different scientific fields have been implemented and tested with experimental measurements from an isotropic medium. The imaging results are based on a single damage but also multiple damages at the same time. Throughout this report, we here assume baseline reference measurements from the undamaged structure and furthermore process the differential signal, i.e. the residual between measurements from the damaged structure and the aforementioned reference state.
The remainder of this paper is organized in the following way: The mathematical background of the utilized imaging techniques is provided in section 2, where a DAS-type algorithm serves as the benchmark. In section 3 the experimental setup is described. In sections 4 and 5 results are presented and discussed, respectively. Conclusions are finally given in section 6.
Mathematical description of matrix imaging techniques
The isotropic linear guided wave propagation can be asymptotically described by the Helmholtz equation of the form: Here u(x, ω) is a simplified scalar wave field associated with a certain guided wave mode of interest, at the location x, for the circular frequency ω after the excitation of the source function f at the source location s. The simplification is necessary since conventional piezoelectric transducers only measure a scalar voltage instead of the three-component elastic wave field. As a result the following modelling excludes other wave modes, mode conversion and the treatment of polarization. Nevertheless, the frequency-dependent phase velocity c of the target wave mode implicitly includes the dispersion of the simplified amplitude, if the solution is calculated for all frequencies of interest. Additionally the background velocity model c 0 (x, ω) is disturbed by a perturbation α(x), so that: Due to the imposed linearity, the wave field can be split into an incident wave field u i and a scattered wave field u s : Accordingly, the incident wave field can be considered as the guided wave SHM baseline measurement, whereas the scattered wave field represents the difference between a damaged state and the baseline. In realistic applications with varying additional mechanical loads or environmental conditions this becomes more complicated [23,24], requiring a sufficient amount of observations and accurate compensation or selection strategies [25,26]. Using assumptions (3) and (2) in equation (1) leads to the Lippmann-Schwinger equation: obtained by integrating over the whole observation domain Ω, the frequency range of interest and using Green's function G [27,31]. The non-linear equation includes multiple scattering at the sensor location r and therefore all damage cases which could be asymptotically described by the chosen wave equation.
For the sake of simplicity, the disturbance α is assumed to be considerably small compared to the background velocity, so that the interaction of the scattered field with the anomaly itself can be neglected. This assumption is known as the Born approximation and simplifies the above equation further to a linear integration: describing the following synthetic experiment: The source pulse propagates from the transmitter location s to any point x in the observation domain. The resulting incident wave field interacts linearly with the anomaly distribution, causing a set of scattering signals, which are finally all independently propagated to the receiver location r and superimposed into the artificial measurement. In the following the equation will be referred to as the forward operator.
The characteristics of an operator become clearer if equation (5) is reformulated as a discrete linear equation of the form: for all K frequencies, R receivers and all N points of interest inside the domain: Since most transducers in SHM networks can usually act as source and receiver consecutively, a complete dataset consists of various measurements of the above form with S different sender positions x s . As a consequence the data space in equation (6) of size K × R can be increased to M = K × R × S by stacking the different datasets, without changing the model space of size N. In the following, this stacked artificial measurement setup is implicitly assumed. Conventional imaging of the velocity perturbation is realised by the adjoint of the forward operator [28]: which is the conjugated and transposed operator matrix F T multiplied with the data. For the chosen discretization and including all source points, this corresponds to so-called prestack migration [11], while in conventional Non-Destructive Testing this process is known as the Total Focusing Method [29]. A physical interpretation of the adjoint can be derived by applying it to equation (5), the integral equation of the forward operator: Transformed into the time domain, the expression corresponds to the zero-lag cross-correlation of the forward propagated second-order time derivative of the source pulse and the scattering field. The latter one is realized by introducing the foremost measured data at all receiver locations as virtual sources and by propagating it backwards in time. The above formulation is usually implemented without the memory-intensive assembly of the linear equation system itself.
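A minimal sketch of this adjoint (Total-Focusing-type) imaging step, assuming the operator matrix F has already been assembled and the data are stacked as described, might read as follows; all names are illustrative.

```python
import numpy as np

def adjoint_image(F, d, grid_shape):
    """Conventional imaging by the adjoint of the forward operator: back-project
    the stacked measurement vector d (length K*R*S) with the conjugate transpose
    of F (shape K*R*S x N) and map the squared magnitude onto the pixel grid."""
    m_adj = F.conj().T @ d
    return (np.abs(m_adj) ** 2).reshape(grid_shape)
```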
The adjoint has the advantage of being a unique, stable and cost-efficient operation, but it will not necessarily map the correct scattering intensities since it only re-adjusts phase or time shifts, respectively. Additionally the data contains noise, exhibits truncation due to limited aperture, is incomplete or suffers from aliasing due to coarse sensor spacings, finally leading to imaging artefacts [28,30]. This issue can be overcome by calculating the inverse of equation (6), which can also be interpreted as a deconvolution of equation (5) [31]. Since the operator matrix F is mostly neither square nor invertible, the model vector m is calculated indirectly by solving an optimization problem for a given L p -norm. [30] used a preconditioned conjugated gradient least-squares scheme with p = 2 and a Kirchhoff-Migration operator, while [13] used the same method for a RTM operator. [32] compared the method of [30] with various optimization problems formulated in the L 1 - and L 0 -norm, giving compressed sensing solutions. Compressed-sensing methods and sparse representation techniques [4,33] are promising approaches for enhancing imaging results.
In the work at hand, results for different operators and solvers are compared for a typical SHM experimental setup. Besides the adjoint or conventional solution, the least-squares method (LSM) of [30], the orthogonal matching pursuit (OMP) implemented by [34] and the SPGL1 [35] solver of [36] are being used. Precisely, the latter one corresponds to the basis pursuit denoising (BPDN) problem: with the a priori regularization parameter τ. The OMP is implemented by solving the convex least-squares problem: min ∥Fm − d∥ 2 2 subject to ∥m∥ 0 ≤ E.
Here, the L 0 norm seeks a solution with a given number of nonzero entries E. In certain cases, this a priori enforced sparsity can be critical, because it limits the number of defects which can be resolved or it limits the spatial resolution of a formation of multiple defects.
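For illustration, a bare-bones greedy OMP of the kind referenced above could be sketched as follows; the actual implementation used in the paper is the one of [34], so this version is only a simplified stand-in with illustrative names.

```python
import numpy as np

def omp(F, d, n_nonzero=5):
    """Minimal orthogonal matching pursuit: greedily grow a support of at most
    n_nonzero columns of F and refit by least squares on that support."""
    residual, support = d.copy(), []
    m = np.zeros(F.shape[1], dtype=complex)
    for _ in range(n_nonzero):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(F.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit restricted to the selected columns
        coef, *_ = np.linalg.lstsq(F[:, support], d, rcond=None)
        residual = d - F[:, support] @ coef
    m[support] = coef
    return m
```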
Kirchhoff-migration
The Kirchhoff operator in equation (8) is realized by using an analytical Green's function: with the phase velocity c(ω) of a chosen wave mode and without acknowledging any boundary reflections. Therefore the operator only contains information of direct arrivals and treats other modes or multipath contributions as noise. The propagation is assumed to be two-dimensional, with an associated cylindrical geometrical spreading rate, because guided elastic waves in plate-like structures show similar characteristics. For the comparison with different methods, the imaging output I(x) is given by the squared absolute value of the calculated scattering intensities m or m l , respectively. Note that, especially in geophysical literature, it is common to map the real part of m. Also the used source wavelet is transformed into zero phase beforehand.
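A sketch of how such a Kirchhoff forward matrix could be assembled is given below. The far-field form exp(-i ω r/c(ω))/√r with purely cylindrical spreading and direct arrivals only is an assumption standing in for the paper's exact Green's function; constant factors and the source spectrum are omitted, and all names are illustrative.

```python
import numpy as np

def kirchhoff_operator(omegas, senders, receivers, pixels, c_phase, eps=1e-9):
    """Row-wise assembly of the forward matrix F for the Kirchhoff operator.
    pixels is an (N, 2) array of grid points, senders/receivers are (x, y) pairs,
    and c_phase(omega) returns the dispersive phase velocity of the chosen mode."""
    rows = []
    for s in senders:
        for r in receivers:
            d_sp = np.linalg.norm(pixels - s, axis=1) + eps   # sender -> pixel distances
            d_pr = np.linalg.norm(pixels - r, axis=1) + eps   # pixel -> receiver distances
            for w in omegas:
                k = w / c_phase(w)                            # dispersive wavenumber
                rows.append(np.exp(-1j * k * (d_sp + d_pr)) / np.sqrt(d_sp * d_pr))
    return np.asarray(rows)                                   # shape (S*R*K, N)
```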
Reverse-time-migration
For reasons of better comparison with the other two operators and in contrast to classical numerical formulations, the RTM operator in this work is realized by a series of semi-analytical expressions, each one acknowledging a specific side-wall reflection of a certain order and thereby minimizing numerical influences in the benchmark. For each source position s and each side-wall reflection, additional Green's functions are defined, with virtual source positions constructed by the method of images and an algebraic sign swap. By a subsequent recursion scheme multiple side-wall reflections can be added as well. In line with Kirchhoff-Migration, the imaging output is again given by the squared absolute value of the scattering intensities m or m l .
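The construction of the first-order virtual sources can be sketched as follows for a rectangular plate; the coordinate convention (plate corner at the origin) and the names are assumptions made for illustration.

```python
import numpy as np

def first_order_image_sources(src, plate_size):
    """Virtual sources for single side-wall reflections of a rectangular plate
    (method of images); higher reflection orders follow by recursively mirroring
    these positions. Each image contribution enters the semi-analytical RTM
    operator with a swapped algebraic sign, as described in the text."""
    x, y = src
    lx, ly = plate_size
    return np.array([(-x, y),           # mirror across the edge x = 0
                     (2 * lx - x, y),   # mirror across the edge x = lx
                     (x, -y),           # mirror across the edge y = 0
                     (x, 2 * ly - y)])  # mirror across the edge y = ly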
Delay-multiply-and-sum
Delay-and-Sum-type beamformers constitute an important class of techniques that have already been employed early [8] in the development of guided-wave damage imaging. An improved variant of Delay-and-Sum that is named Delay-Multiply-and-Sum (DMAS) emerged in the field of microwave-based breast-cancer detection [37] and has since been implemented for ground-penetrating radar [38], medical ultrasonics [39] as well as photoacoustic microscopy [40]. In the present article, DMAS is introduced for Lamb-wave damage imaging. It here serves as a state-of-the-art benchmark for the other presented techniques. The ability to visually inspect structural defects calls for an image formation process which provides an intensity map I(x) based on differential signals u s ij (t). In the DAS framework, the contribution originating from sender i and receiver j (out of the S senders and R receivers) reads: where temporal integration is carried out over a discrete window T and where furthermore D ij (x) indicates a round-trip delay. Here, D ij (x) = (|s i − x| + |r j − x|)/v is derived from the group velocity v of the chosen mode and the excited centroid frequency, the distance |s i − x| from sender i to position x as well as the distance |r j − x| from this point to the receiver j. Moreover, pre-processed signals ũ ij (t) are considered: the differential signals u s ij (t) are detrended, smoothed (moving-average filter and zeroizing amplitudes below the noise threshold), also aligned in order to correct for phase jitter and moreover distance-gated with respect to the chosen mode. DMAS introduces usage of the products of the signals where all possible signal combinations are involved: The introduced multiplications are either interpreted as correlations [38] or geometric means [39,40]. Formally, ∑ T ũ ij · ũ mn represents the unnormalized Pearson correlation coefficient of ũ ij and ũ mn, which means that coherent components in both signals contribute more strongly to the image intensity I than incoherent ones. The product ũ ij · ũ mn = (√(ũ ij · ũ mn))² can also be interpreted as the squared geometric mean of both signals, where squaring omits the problem associated with possibly negative signs of the products. Instead of S · R signals in the case of Delay-And-Sum, DMAS hence makes use of an increased number of 'mean' signals, namely (S · R)². [41] suggested a non-linear aggregation of the aforementioned individual sender-receiver contributions, which is utilized also in the present work. Instead of linear superposition, a median-based fusion I(x) = median ijmn (I ijmn (x)) is used.
Experimental setup
and a thickness of 1.5 mm was placed in a climatic chamber in which controlled temperature conditions are present. Twelve circular piezoelectric transducers with a diameter of 10 mm and a thickness of 0.2 mm were used. The exact positions of the transducers are listed in table 1. Each actuator-sensor pair is recorded at room temperature with a multiplexing unit described in [42]. The excitation signal in the experiment is a Hann-windowed tone-burst with a carrier frequency of 70, 100, 200, 300, 400 and 500 kHz. Damages in the form of through holes with a diameter of 5 mm were inserted at locations listed in table 2. This arrangement implies measurement data from a single damage as well as two and three simultaneous defects of the same kind at different locations, probing if the imaging algorithms are able to discriminate between those adjacent scatterers.
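Coming back to the DAS/DMAS image formation described above, the following much-simplified sketch illustrates the idea: the windowed integration is reduced to a single delayed-sample pick and the DMAS pair products are fused by a median, so this is an illustration of the principle rather than the paper's implementation; all names are illustrative.

```python
import numpy as np

def das_dmas_images(signals, senders, receivers, pixels, v_group, fs):
    """signals[i][j] is the pre-processed differential trace for sender i and
    receiver j (sampled at fs); senders/receivers/pixels are coordinate arrays."""
    S, R = len(senders), len(receivers)
    das, dmas = np.zeros(len(pixels)), np.zeros(len(pixels))
    for p, x in enumerate(pixels):
        delayed = []
        for i in range(S):
            for j in range(R):
                tof = (np.linalg.norm(senders[i] - x)
                       + np.linalg.norm(receivers[j] - x)) / v_group   # round-trip delay
                n = int(round(tof * fs))
                trace = signals[i][j]
                delayed.append(trace[n] if n < len(trace) else 0.0)
        delayed = np.asarray(delayed)
        das[p] = delayed.sum() ** 2                      # Delay-and-Sum contribution
        prods = np.outer(delayed, delayed).ravel()       # all (S*R)^2 pair products
        dmas[p] = np.median(prods)                       # median-based fusion [41]
    return das, dmas
```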
Benchmark parameters
Calculations were performed for all three damage configurations, the three operators, six excitation frequencies, up to four solving methods and various spatial sampling rates dx. A sampling of 10 mm is used for the sparse OMP and BPDN solvers, whereas 2.5 mm is used in the other cases. Since for DMAS the additional pre-processing is essential, those calculations were carried out for the other two operators as well. For the OMP solver the maximum number of damages was limited to E ≤ 5 and for the BPDN approach the regularization parameter was chosen as τ ≤ 2. The semi-analytical RTM operator included side wall reflections up to the second order. Finally all imaging results were normalized afterwards and the following benchmark parameters were extracted: the peak-signal-to-noise-ratio (PSNR), the maximum value of the background noise, the standard deviation σ(x b ) of the background noise and the maximum peak value of each single hole in a damage formation inside the associated so-called trust region. Therefore, a circular region with a diameter of 3.5 cm was defined around the center of each hole. All values inside the regions x d k were assigned to the respective hole k and all N b values outside were declared as background x b . A single hole k is detected if its maximum peak value is higher than the background peak. And a whole damage formation is correctly detected ('✓') if this is true for all holes. Otherwise the formation is only partially ('(✓)') or not detected ('') at all. For the normalized results the logarithmically scaled PSNR is defined as: with the perfect reconstruction K(x) for a given spatial sampling rate. Here, K(x) acts similarly to an indicator function which equals 1 exactly at the damage position and is zero elsewhere. Note that the square-root of I(x) is used, since the expression was already squared by the associated imaging function.
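A sketch of the PSNR metric and of the trust-region detection rule could look as follows; the exact PSNR expression is not shown in the text, so a standard peak-over-RMSE definition applied to √I is assumed, and the names are illustrative.

```python
import numpy as np

def psnr_db(I, K):
    """Logarithmically scaled PSNR of the normalized image I against the perfect
    reconstruction K (1 at the damage pixels, 0 elsewhere); the square root of I
    is compared, and a peak value of 1 is assumed for the normalized images."""
    rmse = np.sqrt(np.mean((np.sqrt(I) - K) ** 2))
    return 20.0 * np.log10(1.0 / rmse)

def detected(I, pixels, hole_centres, radius=0.0175):
    """A hole counts as detected if its peak inside the 3.5 cm (radius 1.75 cm)
    trust region exceeds the maximum of the background."""
    in_any_region = np.zeros(len(pixels), dtype=bool)
    hits = []
    for c in hole_centres:
        region = np.linalg.norm(pixels - c, axis=1) <= radius
        in_any_region |= region
        hits.append(I[region].max())
    background_peak = I[~in_any_region].max()
    return [h > background_peak for h in hits]
```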
Damage localization for single damage
All the results concerning the detection success are provided in table 3
Damage localization for multiple simultaneous defects
Considering here multi-damage scenarios, the following depictions are presented: In figure 7 the conventional/adjoint solutions at 200 kHz for all three operators are drawn. Keeping the same frequency, the results concerning the least-squares and the sparse solutions are given in figure 8. Additionally, in figure 9 the PSNR is depicted for all studied frequencies. Furthermore, detection quality among the methods is compared in figure 10.
Discussion
All tested methods, i.e. operator-solver combinations, were in general able to detect damage, especially given a suitable inspection frequency selection (see table 3). From table 3 it can also be observed that the single defect could be detected with greater success, likely because a single defect approximately fulfills a Born-type assumption. In general, the methods were also able to clearly separate multiple defects at least at one of the studied central frequencies. For example, in the three-hole scenario RTM in conjunction with LSM was able to detect all defects only at 200 kHz, exhibiting the poorest performance in this regard among the tested methods and in terms of the studied metric. From the extracted PSFs in figure 5 the increased resolution obtained by applying L p -solvers can be observed for all methods. The resolutions of RTM and Kirchhoff are comparable, whereas DMAS showed significantly reduced oscillations, especially along the vertical axis. This circumstance can also be observed in figure 3. Remaining imaging artefacts around the damage in figures 3 and 4 are assumed to stem mainly from the limited number of sensors and from modelling errors, precisely an inaccurate velocity model and the simplification due to the chosen Helmholtz equation, which implicitly neglects polarization effects and mode conversion. With additional holes in figures 7 and 8 the noise level increases for all operators, most likely due to effects of multiple scattering, which are not included by the underlying Born assumption.
Comparing the metrics in figures 6, 9 and 10, RTM was generally outperformed by its Kirchhoff counterpart. This circumstance, at first, contradicts the intuition that additional information, i.e. acknowledging the further illumination from the virtual sources due to the side-wall reflections, should lead to better results. However, the noise in the data and the modelling error can outweigh this benefit. Firstly, reflections generally have longer travel paths and are therefore more sensitive to modelling errors caused by imperfectly assumed phase velocities or side-wall geometries. Secondly, the longer travel path commonly leads to lower amplitudes due to damping, which in turn decreases the signal-to-noise ratio. Thirdly, especially for the considered sensor arrangement, there is no portion of the plate which is not significantly illuminated by first arrivals.
In contrast, the advantage of sparse solvers consists in a significantly improved signal-to-noise ratio. In figure 6 the OMP and BPDN based approaches consistently provide a higher PSNR than the other implemented approaches, particularly for 200 and 300 kHz. The fact that the performance varies with excitation frequency can rely on different factors, two important ones being the following reasons: (i) The applied operators can reproduce the experimental signals well, if the tuned excitation behaviour of Lamb waves [43] is captured well by the operator. For the given experimental setup, in the range of 200 and 300 kHz Lamb waves exhibit a pronounced S 0 mode which is reproduced successfully by all the studied operators. And (ii) the wavelength of the actuated waves changes with frequency. Assuming Lamb waves with pure S 0 mode, then the wavelength will drop with increasing frequency leading, at least in theory, to an enhanced resolution of smaller defects. The improved resolution of individual defects might be responsible for the good performance observed at 500 kHz in the multi-site damage configuration depicted in figure 9. Nonetheless, interpretation of the intricate scattering processes involving Lamb waves is complicated due to their complex propagation behaviour.
Among the sparse solutions, OMP outperforms BPDN in terms of PSNR in figures 6 and 9. DMAS shows overall similar performance like the sparse solvers, potentially in part due to the median-based filtering which facilitates clutter suppression. Finally, when looking again only at the most promising centroid frequency of 200 kHz, from figure 10 it is evident that the Kirchhoff operator in conjunction with the OMP solver delivers the minimal maximum-noise value (black lines) and the smallest variance in noise (red lines) which both together facilitate the clearest discrimination between defect and noisy background. Especially, for the three-hole scenario the reconstructed intensity of all three damages is comparatively high. The corresponding imaging result is given in figure 8 where Kirchhoff OMP hence provides the most promising support regarding the unambiguous identification of spots for targeted maintenance.
Conclusion
In this work, damage imaging using guided ultrasonic waves is studied in an experimental setting comprising an aluminum plate with single and multi-site damage in the form of through-holes. Several model-based imaging algorithms are introduced in a unifying framework. Those methods are compared based on different figures-of-merit. The results are discussed in light of theory-based arguments. Due to the consistent formulation of the problem based on scattering theory, it is possible that the imaged intensity of the scatterer is correlated with e.g. damage size. Therefore, the presented methods could be utilized in the future in a gauged SHM system in order to estimate damage size or damage severity. Additionally the possibility to approximate Green's function could include more complex geometrical features like nozzles, varying thickness or bent parts. Moreover, through deployment of iterative procedures and by taking into account higher-order scattering processes, it is a formidable open question for future research whether the resolution could be further improved in the case of multi-site damage. | 2020-10-29T09:08:57.580Z | 2020-10-23T00:00:00.000 | {
"year": 2020,
"sha1": "600191af2405228847ee2453f2917847ad45776d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1361-665x/abba6d",
"oa_status": "HYBRID",
"pdf_src": "IOP",
"pdf_hash": "a7d61b68eeb926e34c95e9e9e4a738dbf5104fb4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
253869914 | pes2o/s2orc | v3-fos-license | Investigation and Research on the Inheritance and Development of Kangding Liuliu Tune
Kangding Liuliu Tune is the creative prototype of the world's top ten love songs "Kangding Love Songs", and it is also a representative Han folk song circulating in the minority areas of
"absorbing and resurrecting love; love that stands the test of time and even unites different generations. Most frequently, this love bears a tragic connotation (e.g. harsh times or life circumstances separate the beloved) but, like in I. Bunin's stories, which L. Rzhevsky greatly appreciated, love is ingrained in the narrator's memory (all the writer's works are first person), inevitably preventing personality destruction or even bringing back to life not only the narrator, but also secondary characters, including those recently being cynics and voluptuaries" [1].
Criticism of the Russian emigrant literature observed the tradition of Bunin in the work of L.D. Rzhevsky both at the level of form and at the content-literary level. R.B. Gul believed that it was exactly L.D. Rzhevsky to be destined to revive the true sensuality and beauty of eroticism in Russian literature: "The theme of sensual love proves a failure in Russian prose. It is not a Russian theme. Having turned 70, Bunin tried to fill this gap in Russian prose and created the erotic "Dark Avenues". However, his attempt was not entirely successful." L. Rzhevsky, in his turn, considered I. Bunin one of the genius Russian writers and believed "Dark Avenues" to be an example of "Russian literary eroticism" [2].
Let us turn to L. Rzhevsky's novel "The Sunflower in Revolt" which contains 17 chapters. It is about the Russian intelligentsia, emigration and the literature of a "home" in exile. The title originates from a poem by I. Drach, The Ballad of a Sunflower. The palimpsest technique, so characteristic of L. Rzhevsky, expands to the size of a large-scale philosophical allusion: "sunflower" is what one character, a debater and nonconformist Sergei Sergeevich, calls the other, the writer Dima, "for his narcissism".
The author skilfully projects individual destinies on a universal scale, while the mosaic composition of the novel only contributes to the expansion of its sociophilosophical area. The thematic paradigm of the novel is presented by traditional for L. Rzhevsky themes: homeland, life and death, love, creativity, memory, historical mission, suffering and redemption.
Space is one of the key elements that form the character's vision of the environment and one of the ways to create the author's model of the world. Another element that plays an important role is the category of time, which is closely connected with spatial layers. This synthesis is commonly referred to as "chronotope". Chronotope, as defined by M.M. Bakhtin, is the intrinsic connectedness of temporal and spatial relationships that are artistically expressed in literature. "In the literary artistic chronotope, spatial and temporal indicators are fused into one carefully thought-out, concrete whole. Time, as it were, thickens and becomes artistically visible; likewise, space becomes charged and responsive to the movements of time, plot and history. This intersection of axes and fusion of indicators characterizes the artistic chronotope" [3].
Time is inseparable from space. However, unlike space that acquires meaning in being filled with sacred objects, time predetermines the possibility of the formation and structural organization of space.
In general, time is one of the main forms of the existence of the world, the emergence, formation, development, and destruction of any phenomena of being. The categories of time are associated with the sequence of stages in nature, human life and the development of consciousness.
Spatio-temporal representations served as a means of image generalization of life phenomena, while retaining their objective foundation (we mean the "stopping: of time, its "stretching" and "compression", the overlap of temporal layers). However, we may refer to the connection of spatio-temporal elements in the structure of a literary work as a single whole. The reproduction of spatio-temporal relations by the realist writers of the turn of the century suggests a certain typology, even despite their noticeable creative individuality. There were no extremes or subjective-mystical interpretation of time and space, but an emotional-aesthetic variety of solutions to these relations was present. Flexibility in the perception of space enhanced the dynamism of action (this also marked Chekhov's later works). The spatial concepts of "distance", "space", "road", while preserving all their reality, are also considered as symbols (Bunin's prose). Bunin's concepts of time and space, as we have already mentioned, are historic, material, and visible, but it is the author who connects times, epochs, civilizations, peoples, and generations, looking at what happened or is happening from the distance of centuries and spaces.
MEMORY IS THE HIGHEST MEASURE OF HUMANITY
Philosophically, I. Bunin makes the post-climax part of Tanya and Muza equal to the pre-climax part of In Paris and Natalie. Plot structures of stories with differently located compositional climax may be similar. Thus, short stories Tanya and Natalie have a spiral plot, while Muza and In Paris -a linear one.
These structural differences only go to emphasize that "all roads lead to Rome": people reach life outcomes in different ways, and to all of them it seems (just seems!) that the path lies through happiness.
| 2022-11-25T17:01:33.082Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "cb2b68d476e7005d6923a817215ad469eea8b198",
"oa_license": "CCBYNC",
"oa_url": "https://files.athena-publishing.com/article/124.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4525b2b7330c25bcf2d85117144bcfe83742e0fc",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
232291029 | pes2o/s2orc | v3-fos-license | The history of human land use activities in the Northern Alps since the Neolithic Age. A reconstruction of vegetation and fire history in the Mangfall Mountains (Bavaria, Germany)
Millennia of sustainable, low intensity land use have formed the cultural landscapes of central Europe. Studies from the Central Alps show that mountain pastures also look back on many thousand years of land use history. In this palynological and pedoanthracological study in the border region between Germany and Austria in the Mangfall Mountains, we aim to close the knowledge gap that exists for the German part of the Northern Alps, where no conclusive evidence for the onset of pastoral activities has been presented so far. Our results reveal strong evidence that mountain pasture use in this region reaches back to the Iron Age at least. However, the reconstruction of vegetation and fire history indicates human interaction with the environment much earlier, starting in the Neolithic Age, where we found evidence of slash and burn activities and first occurrences of pasture indicator pollen. A rising number of mega charcoal pieces dated to the Bronze Age suggests increased slash and burn activities, possibly linked to the creation of open space for pasturing. Therefore, our results provide profound evidence of human interaction with the mountain environment beginning in the Neolithic Age, and clear evidence of mountain pasture use beginning during the Iron Age at 750 BC. Based on palynology and pedoanthracology it is, however, difficult to clearly differentiate between pasturing, hunting and other human interactions with the environment. Further archaeological studies in this area could add valuable information to our findings and shed more light on the early history of farming activities in the Northern Alps.
Introduction
Central European landscapes are strongly shaped by millennia of human land use practices (Poschlod, 2017). Among these cultural landscapes, mountain pastures are an especially species rich and therefore valuable habitat (Chemini and Rizzoli, 2003). Owing to their remoteness and wildness high altitude areas in the European Alps though are often regarded by the general public as natural ecosystems with a shorter and less intensive history of human land use activities. More and more studies, however, reveal that human interaction with the environment and land use at high altitudes in the Alps have a much longer history than previously thought (e.g. Dietre et al., 2020;Hafner and Schwörer, 2018;Kutschera et al., 2014;Mandl, 2006;Putzer et al., 2016;Reitmaier, 2017). The use of mountain pastures enabled early farmers to expand their settlements or even produce excess food for trading purposes by reducing the pressure on the scarce agricultural land in the often-narrow alpine valleys (Reitmaier and Kruse, 2019). Seasonal livestock management is a common practice in mountain regions until today and farmers still value mountain pastures as an important part of their farming practice. Most of the studies dedicated to understand early land use history at high altitudes in the Alps are concentrated in the Central and Western Alps, and the northern fringe of the Alps, especially the German part, remains poorly investigated (Gilck and Poschlod, 2019). Very few archaeological single discoveries indicate human presence in the German Alps in prehistoric times and allow for no precise understanding of their interaction with their environment (Uenze and Katzameyer, 1972). Archaeological and archaeobotanical studies from Tyrol in Austria however show, that the northern fringe of the Alps was a region which was already frequented very early by Mesolithic and Neolithic hunters, miners, and settlers Leitner, 2003;Schumacher, 2004;Von Scheffer et al., 2019). This study therefore aims to fill this knowledge gap and investigate human interaction with the environment throughout the Holocene in the Bavarian Alps. To achieve this, we reconstructed the vegetation of the last 7500 years using palynological methods to identify signs of first human impact on the vegetation. Additionally, we used charcoal analysis to reconstruct the fire history of the region. Previous studies could show that additional charcoal analysis can be a very useful tool for better understanding human impact on the environment (e.g. Gobet et al., 2003;Nelle et al., 2010;Poschlod and Baumann, 2010;Tinner et al., 2005). Especially the use of identifiable soil charcoals as indicators of local fire events provides additional and valuable information (Nelle et al., 2010). Furthermore, we used geochemical parameters, like organic matter content and C/N ratio, which gives us more information about the history of the peatland and local factors influencing the peatland. The results of our study were compared with recent climate reconstructions, which allows for a better differentiation between natural and human induced vegetation changes in our study area.
Study area
The study was carried out in a peatland situated at 1450 m a.s.l. in the lower subalpine belt of the Mangfall Mountains in the district of Miesbach in Bavaria (Germany; Figure 1). Alpine farming still plays an important role in the region and many mountain pastures in the region are used by local farmers. The complex geology of the site with a mixture of marl (Kössener Schichten), dolomite (Hauptdolomit), different sorts of limestone (Plattenkalke, Jurakalke, Kreidegesteine, Rhätkalk) and glacier deposits combined with the typical sub oceanic mountain climate of the northernmost chains of the Limestone Alps facilitated the forming of several peatlands in the surrounding (Dietmair, 2001). The area is protected by the European NATURA 2000 network and the Ramsar Convention on Wetlands of International Importance since 2007 (Faas et al., 2007). The peatland covers the whole bottom and the slopes of a Polje (Karst depression with a natural brook that vanishes into a Ponor) and is divided by the Austrian-Bavarian border. The size of the peatland is approximately seven hectares and it is part of an alpine pasture system (Bayerische Wildalm), which is still grazed by horses on the Bavarian side and by cattle on the Austrian side (Faas et al., 2007). Grazing intensity, and land use intensity in general, however decreased over the last centuries (Faas et al., 2007).
Several archaeological findings from the surrounding of the study area indicate very early human activities in the region. Research from the Rofan Mountains, only 15 km south of our sampling site, reveals proof for human activities in high mountain areas beginning in the Mesolithic Age Kompatscher and Kompatscher, 2005;Leitner et al., 2011). Beginning in the Mesolithic Age, humans reportedly began to hunt and exploit flint stone and radiolarite deposits in the Rofan Mountains at altitudes around 2000 m a.s.l. Kompatscher and Kompatscher, 2005;Leitner et al., 2011). This, together with findings from the Fotschertal (Ullafelsen) south of Innsbruck, demonstrates, that Mesolithic hunters already populated the Northern Alps (Schäfer, 1998). Excavations of stone tools of northern alpine and southern alpine origin in the Fotschertal revealed, that these tools were transported via mountain pass routes over the whole Alpine ridge from the Italian to the Bavarian Alps during the Mesolithic Age (Schäfer, 1998). The excavations in the Rofan Mountains show a continuous presence of humans throughout the Mesolithic, Neolithic, Bronze, and Iron Age. Bones from goat and sheep at approximately 2000 m a.s.l. in the Rofan Mountains, dating to the Iron Age support the hypothesis for pasture development . Further proof for the early use of mountain passes between the Inntal and the Bavarian lowlands is provided by Raetian rock inscriptions close to the Schneidjoch at 1600 m (2.5 km from our study site), which clearly prove human presence at a time around 500 BC (Schumacher, 2004). The inscriptions display raetian consecration formulas for a Father and his two sons (Schumacher, 2004). Since these inscriptions seem to have a religious context, they do not provide direct evidence for human interaction with the environment. Contrary to the Austrian side, the Bavarian side of the study area lacks substantial evidence for Mesolithic and Neolithic activities. Findings of rock engravings from another site at 1200 m a.s.l. in the vicinity of the peatland suggest human presence during the early Bronze Age (Scherm, 2012). This finding, however, has not been scientifically confirmed yet, but together with the inscriptions at the Schneidjoch indicates human presence along mountain pass routes from the Inn Valley to the German foothills of the Alps. Further evidence of prehistoric human activities on the Bavarian side of our study area is scarce and based only on single discoveries (Uenze and Katzameyer, 1972). Noteworthy is the excavation of a big ceramic vessel on the northern side of the Tegernsee, which dates to the late Bronze Age and by its size indicates settlements in the foothills of the Alps, as the transport of such a vessel seems unlikely (Heim, 2012).
Field work
On the 4th of November 2016, a first sediment core (WA-1) was taken with a Russian-type peat corer (Belokopytov and Beresnevich, 1955) with a 6 cm wide and 50 cm long coring chamber on the Bavarian side of the peatland. A depth of 360 cm was reached. Owing to physical constraints we were not able to cover the whole depth of the mire. Therefore, another attempt was made on the 21st of September 2017, where a second sediment core (WA-2) with a length of 425 cm was taken at the identical location following the same sampling approach. Both cores were immediately wrapped in plastic and after transportation to the University of Regensburg, the cores were stored at 4°C in a cooling chamber. In the summer of 2017 two soil profiles from the immediate surroundings of the peatland were taken for soil charcoal analysis. The samples were washed through a 1 mm sieve and charcoal pieces were extracted. Ten charcoal pieces were selected from a total of 165 pieces and sent to the Curt-Engelhorn-Zentrum for Archäometrie in Mannheim, Germany for radiocarbon dating.
Chronology
The age-depth model of the core is based on eight radiocarbon dates (Table 1) obtained from plant macro remains at selected depths from the two peat cores WA-1 and WA-2. The radiocarbon dates were measured with Accelerator Mass Spectrometry (AMS) at the Curt-Engelhorn-Zentrum for Archäometrie in Mannheim, Germany and calculated according to Stuiver and Polach (1977). Calibration of the 14C dates took place with the software SwissCal (L. Wacker, ETH-Zürich) using the IntCal13 calibration curve (Reimer et al., 2013). The package Clam (Blaauw, 2010) within the R environment v. 3.4.0 (R Core Team, 2017) was used to calculate an age-depth model (Figure 2) based on Monte Carlo sampling with 10,000 iterations, using a smoothing spline (with a smoothing level of 0.3). According to the model, accumulation rates are very stable, varying between 12 years/cm in the upper part of the core and 27 years/cm in the lowest part of the core.
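For readers who want to reproduce the basic idea of such an age-depth model, the following minimal sketch interpolates calibrated ages over depth with a monotone spline and derives accumulation rates. It is not the Clam routine used in the study, and the depth/age control points are hypothetical.

```python
# Minimal age-depth sketch (hypothetical control points; the study used the R
# package Clam with Monte Carlo sampling of the calibrated age distributions).
import numpy as np
from scipy.interpolate import PchipInterpolator

depths_cm = np.array([0, 60, 150, 240, 320, 425])        # dated depths (illustrative)
ages_bp = np.array([-60, 900, 2700, 4600, 6300, 7500])   # calibrated ages, cal yr BP

age_model = PchipInterpolator(depths_cm, ages_bp)         # monotone: no age reversals

query_depths = np.arange(0, 426)                          # 1 cm resolution
ages = age_model(query_depths)
acc_rate = np.gradient(ages, query_depths)                # years of peat growth per cm

print(f"accumulation rate: {acc_rate.min():.0f}-{acc_rate.max():.0f} yr/cm")
```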
Palynological analysis
For the pollen analysis subsamples of the first 360 cm sediment core (WA-1) and the lowest part (360 cm-425 cm) of the second 425 cm sediment core (WA-2) were taken. The size of the subsamples was 1 cm³. In a first step subsamples from every 5-6 cm were selected and analysed. In a second step, after reviewing preliminary pollen data, areas of special importance were selected, and further subsamples were taken at these selected depths. In some parts of the core, samples from every 3 cm were analysed so that the total number of samples reached 96. Unfortunately, owing to the local occurrence of Lycopodium clavatum and Lycopodium annotinum we could not use Lycopodium spores for the calculation of concentrations and influx values (Stockmarr, 1971). The samples were treated following the standard acetolysis method (Faegri et al., 2000;Moore et al., 1991). After treatment with 10% HCl and 10% KOH, samples were sieved with 160 µm mesh size. Treatment with concentrated cold HF for 48 h prior to acetolysis was used for samples with high mineral content. Samples were mounted in glycerine. Pollen and spores were identified under the light microscope at 160×-1000× magnification, using a reference collection, as well as identification keys and pollen atlases (Beug, 2004;Faegri et al., 2000;Moore et al., 1991). A minimum of 350 pollen grains was counted per slide and a total of 94 pollen types was identified. The sum for calculation of pollen percentages includes trees, shrubs, and herbs, whereas spores and Cyperaceae pollen were excluded because of their possible local origin from the peatland. The definition of pastoral pollen indicators follows Behre (1981), Festi (2012) and Gilck and Poschlod (2019). Following the method of Poschlod and Baumann (2010), linear regression analysis between the most widely acknowledged and most used pasture indicator species, Plantago lanceolata, and other potential indicator species was performed to identify further local indicator species (Table 3).
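As a simple illustration of the restricted pollen sum described above, the sketch below expresses counts as percentages of a sum that excludes taxa of possible local origin. Taxon names and counts are placeholders, not data from this study.

```python
# Percentage calculation with a restricted pollen sum (illustrative counts only).
raw_counts = {"Picea abies": 80, "Pinus": 60, "Corylus avellana": 40,
              "Poaceae": 30, "Plantago lanceolata": 5,
              "Cyperaceae": 70, "Filicales": 25}

excluded = {"Cyperaceae", "Filicales"}        # possible local origin from the peatland
pollen_sum = sum(v for k, v in raw_counts.items() if k not in excluded)

percentages = {k: round(100 * v / pollen_sum, 1) for k, v in raw_counts.items()}
print(pollen_sum, percentages)
# Excluded taxa are still expressed relative to the same sum, so they may exceed 100 %.
```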
Geochemical analysis
For the geochemical analysis 96 subsamples with a size of 4 cm³ were taken from the exact depths where pollen analysis was conducted. A small part of these subsamples was pulverized, and total carbon and total nitrogen were measured by the Institute of Analytical Chemistry of the University of Regensburg. The remaining part of the subsamples was used to measure organic matter content of the peat using Loss on Ignition (LOI). Our protocol followed the recommendations of Heiri et al. (2001). The second heat treatment at 950°C for estimating carbonate content was left out, since it does not add information relevant to our research questions. The samples were dried for 48 h at 75°C, weighed and then heated at 550°C in a furnace for 4 h, before being weighed again.
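The organic matter estimate follows directly from the mass loss on ignition; the sketch below shows the arithmetic with made-up weights and only mirrors the LOI-550 step, not the full Heiri et al. protocol.

```python
# Minimal LOI-550 sketch (weights are illustrative, not measurements from this study).
def loss_on_ignition(dry_weight_g: float, ignited_weight_g: float) -> float:
    """Organic matter content (%) from mass loss after 4 h at 550 degrees C."""
    return 100.0 * (dry_weight_g - ignited_weight_g) / dry_weight_g

# Example: a sample weighing 1.20 g after drying at 75 degrees C
# retains 0.35 g of mineral residue after ignition.
print(f"LOI550 = {loss_on_ignition(1.20, 0.35):.1f} %")   # about 70.8 % organic matter
```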
Statistical methods
Non-metric Multidimensional Scaling (NMDS) using Bray-Curtis dissimilarity was performed on the complete pollen dataset with the community ecology package vegan v. 2.4-5 (Oksanen et al., 2017). The pollen diagrams were constructed using the quaternary science package rioja v. 0.9-21 (Juggins, 2019). CONISS, temporally constrained hierarchical clustering (Grimm, 1987) based on Euclidean vegetation dissimilarity was used to estimate stratigraphic zones in the pollen data. The broken stick method (Bennett, 1996) was used to evaluate the number of significant stratigraphic clusters. All analyses were performed in the R environment v. 3.4.0 (R Core Team, 2017).
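The ordination step can be approximated outside R as well; the sketch below runs a non-metric MDS on a Bray-Curtis dissimilarity matrix in Python. It is only an analogue of the vegan workflow used in the study, and the pollen matrix here is random placeholder data.

```python
# Hedged Python analogue of the NMDS ordination (the study used vegan in R).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
pollen_pct = rng.random((96, 94))            # samples x pollen types (placeholder data)

dissimilarity = squareform(pdist(pollen_pct, metric="braycurtis"))
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
scores = nmds.fit_transform(dissimilarity)   # two ordination axes per sample

print(scores.shape, "stress:", round(nmds.stress_, 3))
```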
Pollen analysis
The NMDS graph partly reflects the chronology of cultural epochs and shows a correlation of the pollen spectrum with time ( Figure 3). Generally, older samples from the Neolithic Age and the Bronze Age are located in the lower part of the diagram, whereas more recent samples from the Iron Age, Middle Age or Modern Times are situated more in the upper part.
Pollen samples dating to the early Neolithic Age form a distinct group in the lower left part of the graph and are characterized by tree species like Ulmus, Corylus avellana, and Tilia, together with aquatic and semi-aquatic plants like Potamogeton and Sparganium. Pollen samples assigned to the Neolithic Age show a wide distribution but are also located in the lower part of the graph. These samples are characterized by coniferous forest species like Picea abies, Abies alba, and Pinus. The amount of charcoal and Arboreal Pollen (AP) is also correlated to these samples. The Bronze Age samples exhibit a denser pattern and are located center-right in the NMDS graph. They are characterized by Picea abies, Abies alba and Fagus sylvatica. Samples from the Iron Age are widely distributed over the right side, many of them tending to the upper part of the graph, which correlates to pasture indicator species (e.g. Plantago lanceolata) and Non-Arboreal Pollen (NAP) such as Cyperaceae, Cereal type and Poaceae. Samples assigned to the Roman and especially the Migration Period are again located lower in the graph and are therefore characterized by typical closed forest species (Picea abies, Abies alba, Fagus sylvatica). The Middle Age and Modern Time pollen samples are clearly differentiated in the upper part of the graph and are characterized by Cyperaceae, Poaceae, Plantago lanceolata and Cereal type pollen. Additionally, Non-Arboreal Pollen are correlated to these samples.
The Broken Stick Model based on the CONISS cluster analysis identified two main pollen zones, which are subdivided to a total of eight significant subzones ( Figure 4). The first main pollen zone, LPAZ 1 ranges from 5500-2100 BC and is characterized by a high abundance of tree pollen (>80%), with strongly changing proportions of the different taxa over time. LPAZ 2 is characterized by a drop in tree pollen and increases in pollen from herbaceous plant species, especially Poaceae and pasture indicator species. In the following a brief description of the different subzones is given.
LPAZ 1a: 5500-4200 BC, early Neolithic Age
In the lowest section of the core, Arboreal Pollen largely dominates the pollen spectrum with a share of more than 85% ( Figure 4). Among them the most common pollen types are Picea abies, Pinus and Corylus avellana with appr. 20% each. Other common Arboreal Pollen types in this zone are Alnus, Betula, Quercus, and Tilia. The pollen spectrum of herbaceous species consists of aquatic or semi-aquatic plants like Potamogeton and Sparganium but also includes open landscape indicator species like Cichorioidae, Apiaceae, and Ranunculus acris type. Pasture indicator pollen are represented by Artemisia, Chenopodiaceae, Senecio type and Campanula type. The amount of micro charcoal is high in this zone with up to 0.75 charcoal fragments/pollen.
LPAZ 1b: 4200-3400 BC, middle Neolithic Age
Arboreal Pollen values remain high, or even increase to values between 85-95%. Compared to LPAZ 1a, the composition of the Arboreal Pollen taxa changes substantially. Picea abies increases up to 40% of the pollen sum, whereas the amount of pollen from Pinus and Corylus avellana decreases. Abies alba and Fagus sylvatica pollen show an increase, whereby the curve of Fagus sylvatica seems to react slightly delayed. Alnus, Betula, and Quercus do not change much, however values of Ulmus and Tilia decrease. Interestingly Poaceae pollen values show a slight increase in this zone, even though Arboreal Pollen are increasing their dominance. Open landscape indicators and pasture indicators decrease
LPAZ 2e: since 1350 AD -modern period
In the most recent section of the core the share of Arboreal Pollen varies strongly with two minima of 59% around 1500 AD and around 1900 AD. In between of these two minima the share of Arboreal Pollen reaches values above 80%. These strong changes are reflected in the curves of most arboreal species as well. Especially pollen of Pinus and Picea abies vary strongly with a generally slightly increasing trend for both. Abies, Fagus and Quercus are decreasing and Juglans reaches its highest share over the whole core at 0.6% around 1400 AD. The Poaceae curve fluctuates strongly, opposite to the curve of Arboreal Pollen and reaches its maximum at 25% around 1900 AD. Among the pasture indicator pollen, Plantago lanceolata, Rumex and Senecio type have a continuous occurrence
Soil charcoal analysis
The analysis of 165 soil charcoal fragments from two soil profiles close to the peatland reveals evidence for three major fire events. The oldest one dates to the Neolithic Age, at nearly 5000 BC, and consists of two Gymnosperm charcoal pieces (Figure 5 and Table 2). Another four pieces date to the early Bronze Age between 1500-1800 BC. They consist of two pieces of Picea/Larix, one piece of Abies alba and one piece of Gymnosperm charcoal. The last fire event dates to the Middle Age around 1100-1400 AD and consists of two fragments of Pinus, one fragment of Picea/Larix and one fragment of Acer sp.
Geochemistry
Organic matter content, measured with the Loss on Ignition method (LOI), varies greatly throughout the whole peat core (Figures 4 and 5). At the beginning, in the deepest and oldest part, it is very low at 18% but rises continuously to a maximum of 91% at 3800 BC. It decreases again quickly and reaches a low point at 3100 BC with 28%. Over the course of the next 1500 years it rises slowly and reaches 92% around 1600 BC. It remains very high, around 90%, until 50 AD, when a phase with several short minima begins with an initial minimum of 40% at 50 AD. At 300 AD, there is another very short minimum of 34% before recovering back to values around 90%. The biggest minimum is centered around 1000 AD, with 24% at 1150 AD and a subsequent rise to 88% at 1300 AD. Around 1600 AD the organic matter content falls to another local minimum of 50% before it reaches its maximum value of 95% in the uppermost part of the core. The various minima in organic matter content, especially in the younger part of the core, were also visible to the naked eye as grey sandy/silty bands in the otherwise brown/black peat. The C/N ratio starts with low values around 15 in the deepest and oldest part of the core (Figure 4). Right before the transition from LPAZ 1a to LPAZ 1b at 4200 BC it briefly rises to 27, before decreasing to around 20, where the C/N ratio remains low between 17 and 21 for 3500 years until 700 BC. At the transition between LPAZ 2b and LPAZ 2c the C/N ratio rises quickly to maximum values of 38 around 400 BC. There is a general decreasing trend with a few ups and downs for the next 800 years, with minimum values of 16. After that, the ratio increases again strongly, reaching its maximum of 50 at 1700 AD and then decreases to values around 20 at the upper end of the core.
(Caption fragment for Figures 4 and 5: Micro charcoals are weighted by the 100% pollen sum and normalized to 100%. Light-coloured silhouettes represent a 10-fold exaggeration of the percentage values. LOI (Loss on Ignition) shows the organic matter content in percent. Radiocarbon-dated soil charcoal fragments are indicated, together with high lake levels marking cooler phases according to Magny (2004) and cold phases inferred from glacier movements according to Patzelt (1977).)
Pasture indicator pollen
As this research is especially aimed at identifying the beginning of alpine pasturing in the Bavarian Alps, special consideration is given to the selection of pasture indicator pollen types. Pasture indicator pollen types are widely used to detect human impact on the vegetation, but only a few publications are dedicated to the question of defining indicator pollen types. This causes an inconsistency in the use of pasture indicator pollen types in palynological studies across the Alps (see appendix of Gilck and Poschlod, 2019). Frequently used are the indicator pollen types defined by Behre (1981) and, for alpine environments, by Oeggl (1994). Festi (2012) used an experimental approach, comparing modern pollen rain with the present vegetation, to detect local pasture indicator pollen types in a montane and subalpine environment. Still, the selection of suitable indicator pollen types for our study proves difficult for several reasons. First, many typical pasture plants for subalpine and alpine pastures are summarized in large groups because morphological differentiation of the pollen to species level is difficult or impossible. Examples are Ranunculus type, Cichorioidae, Brassicaceae, Apiaceae, and others. This entails the problem that these groups, besides typical pasture plants, often include species which are not typical for open pastures. Yet, many of these pollen types are used in palynological studies at subalpine and alpine elevations as pasture indicator pollen (e.g. Drescher-Schneider, 2009;Oeggl et al., 2005;Röpke et al., 2011;Wick et al., 2003; for more see appendix of Gilck and Poschlod, 2019). Because of the abovementioned problem we decided against the use of these groups as pasture indicator pollen types (with the exception of Chenopodiaceae, which in the Alps are an ecologically very homogeneous group and generally accepted as a good pasture indicator pollen type). We, however, included most of these groups in another, broader category as open landscape indicators, because a majority of the plant species in these groups are inhabitants of open landscapes and they are generally regarded as indicators of human impact in the Alps (Oeggl, 1994).
Another difficulty with the identification of pasture indicator pollen types is the difference in pollination systems and pollen production among plants. Owing to insect pollination and low pollen production, many typical pasture indicator species are strongly underrepresented in pollen records, which makes them insensitive indicators of small scale changes. Wind pollinated pasture species, on the other hand, have the disadvantage that they are potentially of regional origin, which makes local landscape changes difficult to assess (Brun, 2011). Nevertheless, wind pollinated pasture species with a high pollen production such as Plantago, Rumex, Artemisia, or Chenopodiaceae are the most popular and most used pasture indicator species in palynological studies in the Alps because of their strong representation in pollen records (see Appendix Gilck and Poschlod, 2019). Therefore, we decided to include them in the selection of pasture indicator species in this study as well. Campanula type was included following the suggestions of Oeggl (1994) and especially Festi (2012), who found this pollen type indicative of subalpine meadows and pastures.
The regression analysis performed with the most widely acknowledged pasture indicator species, Plantago lanceolata, and other potential pasture indicators confirms the abovementioned selection of pasture indicator species (Table 3). Additionally, it shows that Senecio type might also be a suitable pasture indicator. This group includes many plant genera typical for alpine pastures, like Antennaria, Aster, Bellis, Doronicum, Erigeron, and Homogyne. Selaginella and Ericaceae were not considered despite their correlation with Plantago lanceolata, because of their local occurrence in the peatland (Calluna vulgaris and Selaginella selaginoides are both present in the current vegetation of the peatland). Brassicaceae and Cichorioidae, which showed correlations with Plantago lanceolata as well, were excluded due to the wide ecological range of their species. One shortcoming of this regression analysis, however, is that Plantago lanceolata is considered to be an archaeophyte (Kühn and Klotz, 2002). Pokorna et al. (2018), for example, show in a study in the Czech Republic that Plantago lanceolata was introduced during the late Neolithic to Early Bronze Age. This makes it an unsuitable pasture indicator species for the early Neolithic Age. This could be the reason why we could not find a correlation with Artemisia, which is strongly represented in our pollen record in the Neolithic Age (Figure 4).
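The indicator screening can be illustrated with a few lines of code: each candidate pollen type is regressed against Plantago lanceolata percentages across samples and judged by its correlation. The values below are placeholders, not the percentages behind Table 3.

```python
# Sketch of the indicator screening idea (placeholder percentages, not study data).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
plantago = rng.random(96) * 3                          # % of pollen sum per sample

candidates = {
    "Rumex": plantago * 0.8 + rng.normal(0, 0.3, 96),  # constructed to correlate
    "Artemisia": rng.random(96) * 2,                   # constructed to be unrelated
}

for name, values in candidates.items():
    fit = linregress(plantago, values)
    print(f"{name}: r = {fit.rvalue:.2f}, p = {fit.pvalue:.3g}")
```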
As our study site and its wider surroundings are situated well below the natural tree line, the ratio between Arboreal Pollen and Non-Arboreal Pollen is also a good indicator of human impact on the vegetation, since the natural vegetation at our study site is closed forest and climate induced tree line shifts can be ruled out in the time period covered by this study.
General patterns
The results of the NMDS give a good overview over the general patterns and major changes in the pollen composition and reveal changes on three different levels. First, the development of the local vegetation in the peatland can be observed. Aquatic plants, like Potamogeton and Sparganium type in the Early Neolithic Age indicate the presence of a shallow water lake, and high amounts of Cyperaceae pollen in the more recent samples, indicate the terrestrialisation and development of a Cyperaceae-rich peatland from the lake.
Secondly, the change in forest composition becomes apparent. The change from Ulmus, Tilia and Corylus avellana in the Early Neolithic to Picea abies, Abies alba, Pinus and increasingly Fagus sylvatica in the following epochs mirrors the general development in the forest composition during the Holocene (Küster, 2010;Magri, 2008;Poschlod, 2015Poschlod, , 2017. Third, the strong association of the old samples with Arboreal Pollen and the more recent samples with Non-Arboreal Pollen reflects the landscape changes, from closed natural forests to a more open landscape with Plantago lanceolata, Poaceae and cereal type pollen. This indicates human impact on the vegetation with deforestation followed by pasture use and crop cultivation in the lowlands. The strong association of Corylus avellana with the Neolithic Age could be an indicator of late Mesolithic land use practices. Studies from the Southern Alps and from Northern Germany showed, that Mesolithic communities possibly used fire to promote the growth of Corylus avellana as an important forage plant (Finsinger et al. 2006;Holst, 2010).
LPAZ 1a: 5500-4200 BC, early Neolithic Age
The presence of Potamogeton and Sparganium type pollen in this layer clearly suggests the presence of a (shallow water) lake at our study site during the Early Neolithic Age. The low organic matter content values further support the idea of a small lake on mineral underground. Rising levels of organic matter content toward the end of this zone show a process of terrestrialisation of the lake and the development of a mire. The strong dominance of Arboreal Pollen suggests a closed forest landscape, however the high amount of micro charcoal and the presence of dated soil charcoal pieces at the border of the peatland are strong indicators of local fire events. As already observed in other studies (e.g. Tinner et al., 2005), this fire activity changes the forest composition with decreases of fire sensitive species like Fagus sylvatica and Picea abies and increases in Corylus avellana and Pinus (most likely Pinus mugo) (Figure 4). Since natural fires in this climate zone are rare and unlikely (Carcaillet, 1998;Müller et al., 2013) and climate reconstructions show a cooling phase until around 5000 BC ( Figure 5, Magny, 2004;Patzelt, 1977), the high amount of micro-and soil charcoal could be explained by human slash and burn activities in the vicinity of the peatland. Especially the dated soil charcoals from our study site at nearly 5000 BC ( Figure 5) are strong evidence for local fires. Mesolithic or Neolithic hunters could have populated the area and burned the forest in the surrounding of the peatland for hunting purposes. This theory is further supported by archaeological proof of Mesolithic hunting and mining activities in the nearby Rofan Mountains Kompatscher and Kompatscher, 2005;Leitner et al., 2011) and Mesolithic movements of humans across alpine pass routes (Schäfer, 1998). Another indicator for local human disturbance is the strong presence of open landscape and pasture indicator pollen types for this time (Figure 4). Especially the presence of pollen from insect pollinated plants (Senecio type, Campanula type, Apiaceae, Cichorioidae, Ranunculus acris type) suggests the local establishment of open meadow patches after disturbance by fire. These results could therefore also indicate the use of summer pastures by Neolithic settlers. Studies from the Central Alps show, that Neolithic settlers already used mountain pastures around 4500 BC with small ruminants like sheep and goat (Hafner and Schwörer, 2018;Kutschera et al., 2014). The area of the forming peatland might have been a very attractive site for early herders or hunters as it provided a naturally open area and drinking water for the animals.
LPAZ 1b: 4200-3400 BC, middle Neolithic Age
This zone is characterized by a strong increase in organic matter content, together with a strong peak in Equisetum spores, which indicates the progressing terrestrialisation and peat formation. The beginning of this zone at 4200 BC coincides with the onset of the Rotmoos I glacier advances (Patzelt, 1977) and high lake levels (Magny, 2004), indicating a phase of cold and moist climate. This is reflected in the pollen diagram by decreasing curves of the warm-adapted species Ulmus and Tilia. This climatic deterioration might also have triggered a decrease of human activity in the region, which is supported by very low micro charcoal values, together with a decreasing frequency of pasture and open landscape indicator pollen types (Figures 4 and 5). The decreasing fire activity is also reflected in the forest composition, where fire-sensitive species such as Abies, Picea and Fagus increase and pioneer species like Corylus and Pinus (Pinus mugo) show strong decreases.
LPAZ 1c: 3400-2100 BC late Neolithic Age -early Bronze Age
The transition to LPAZ 1c at 3400 BC is marked by a drop in both organic matter content and Equisetum spores, indicating a permanent or periodical flooding of the peatland. As the peat accumulation rate remains stable, it can be assumed that periodical flooding of the growing peatland caused the influx of mineral material into the peat. This flooding could be caused by different drivers. The Rotmoos II glacier advances (Patzelt, 1977) and high lake levels (Magny, 2004) suggest that a climatic deterioration with higher precipitation, increased snowfall in winter and periodical flooding of the mire during snowmelt might have led to this development. This scenario gains credibility from the fact that in recent years, in cases of strong snowmelt, the peatland was also temporarily flooded (Faas et al., 2007). Deforestation for hunting purposes on the slopes around the peatland could also have led to an increased water runoff, facilitating more frequent flooding events or avalanches in winter. High micro charcoal contents, together with increasing open landscape indicator pollen types, show that increasing human activities could indeed have played a role (Figures 4 and 5). The observed increase in Filicales spores could also be a reaction to fire events as many ferns profit from fire (e.g. Pteridium aquilinum). The increased fire activity is again reflected in the forest composition. Fire-sensitive species (Picea, Abies, Fagus) decline, whereas pioneer species (Pinus, Corylus, Alnus) increase. The lack of dated soil charcoal pieces and the scarcity of pasture indicator pollen for this period, however, suggest that the source of the increased micro charcoal influx is of a rather regional origin and not caused by local fires of farmers around the peatland.
LPAZ 2a: 2100-800 BC Bronze Age
In the Bronze Age, indicators of human impact on the local vegetation become more frequent. Most importantly, four out of ten dated soil charcoal pieces date to this period between 1900 and 1400 BC. This is strong evidence for local fire events, possibly linked to slash and burn forest clearings around the peatland. This is further supported by an increase in pasture indicator pollen types, which show continuous presence in this zone. The increase of Alnus pollen could also be explained by human slash and burn activities in the vicinity of the peatland, since Alnus alnobetula (=Alnus viridis) is a strong pioneer plant, which profits from open landscapes and disturbance (Ellenberg and Leuschner, 2010). Increased hunting activities or pasture use around the mire by Bronze Age farmers could be the reason for the observed changes in the pollen spectrum. This would be in agreement with many other studies, which found evidence for first pasture use at high altitudes in the Alps beginning in the Bronze Age (e.g. Dietre et al., 2012, 2020; Drescher-Schneider, 2009; Festi et al., 2014; Mandl, 2006; Putzer et al., 2016; Walsh and Mocci, 2011; Walsh et al., 2007; Wick et al., 2003). Archaeological and dendrochronological studies from several sites in Tyrol in Austria show that the Bronze Age was also a period of intensive copper mining in the Northern Alps (Pichler et al., 2009, 2018). Alpine pasturing, therefore, could have developed or intensified to provide mining communities in the mountains with food.
LPAZ 2b: 800 BC-50 AD Iron Age -early Roman period
The strong decrease in Arboreal Pollen at the beginning of LPAZ 2b at 750 BC indicates strong human impact on the vegetation around the peatland. The amount of Arboreal Pollen drops below 75%, which according to Magny et al. (2006) and Dietre et al. (2020) represents a threshold value for open landscapes. Since the peatland is situated well below the tree line, climatic factors cannot be made responsible for these changes. Simultaneous rises in Poaceae pollen and pasture indicator pollen suggest that land use and pasturing took place in the surroundings of the peatland. Especially the presence of Campanula pollen indicates local pasturing activities, as this is an underrepresented pollen type originating from local sources only (Oeggl, 1994). The low values of micro charcoals and the lack of dated soil charcoals are evidence for only small and possibly regional fire events and could indicate that the main slash and burn forest clearings in the direct surroundings of the peatland already took place during the Bronze Age (Figure 4). The simultaneous rise in cereal pollen suggests intensified settling activities with crop cultivation in the Bavarian lowlands and the Inn valley. Some authors suggest that cereal pollen found at higher altitudes could also have been transported there by livestock during migration to their summer pastures (Argant et al., 2006; Moe, 2014). This would add further emphasis to the suggestion of summer pastures in the direct vicinity of the peatland. Archaeological and archaeobotanical studies from the Central Alps confirm our findings by showing the beginning or an intensification of alpine pasture use at higher altitudes during the Iron Age (e.g. Carrer et al., 2016; Festi et al., 2014; Haas et al., 2013; Heiss et al., 2005; Putzer, 2009). The Raethian inscriptions at the Schneidjoch at 1600 m a.s.l. close to our coring site (Schumacher, 2004) and archaeological evidence for summer farming in the Rofan Mountains above 2000 m a.s.l. are further proof of human activities in this area during the Iron Age and complement the results of our study very well. Another interesting feature is the sudden rise in C/N ratio at the beginning of this zone after it remained very stable throughout the lower part of the core. These changes could be triggered by local disturbances in the peatland caused by intensive pasturing and local dung deposition by the grazing animals.
LPAZ 2c: 50-1000 AD Roman period -Migration period -early Middle Ages
This period is characterized by recovering Arboreal Pollen values and decreasing curves of Poaceae and pasture indicator values. This trend could be associated with the crisis and population decline during the Migration Period, followed by the downfall of the Roman Empire. This population decline led to decreasing land use intensity, and since arable land and pastures were more readily available in the lowlands, mountain pastures became less attractive for farmers. The nearly complete absence of pasture indicators from local sources like Campanula type and Senecio type further emphasizes the decline in land use intensity. The absence of cereal pollen in the first half of this zone further demonstrates that land use intensity also decreased strongly in the lowlands. Toward the end of this zone, in the early Middle Ages, Arboreal Pollen decreases strongly, and pasture indicators increase. This development corresponds to the increasing population density in the Middle Ages and the associated increased land use intensity (Poschlod, 2017). The first occurrence of Juglans pollen in this zone is evidence for the introduction of this species as a fruit tree by the Romans (Poschlod, 2017).
LPAZ 2d: 1000-1350 AD Middle Ages
During the High Middle Ages, landscape openness reaches its maximum, which is reflected in very low Arboreal Pollen values below 60%, well below the open-landscape threshold of Magny et al. (2006) and consistent with the high medieval land use intensity (Poschlod, 2017). More evidence for increased land use intensity is provided by the results of the pedoanthracological analysis. Micro charcoals have a large peak during the High Middle Ages, indicating regional forest clearings by fire, and several dated soil charcoals from the edge of the peatland are evidence for local fire clearing activities as well. Increasing Corylus and Alnus pollen suggest disturbance, probably caused by slash and burn activities. Forest clearings and intensive pasturing in the surroundings of the peatland could have increased soil erosion, and in the case of extreme weather events, strong runoff from the slopes could have washed mineral material into the peatland. This is supported by a very high amount of mineral material in the peat during the High Middle Ages (Figures 4 and 5). Also, frequent avalanches from the surrounding slopes, caused by a lack of protective tree cover, could transport mineral material into the peatland. The strong peak of Caltha palustris pollen around 1100 AD could be a result of these changes in the peat. The high amount of mineral material could have facilitated the strong spread of the species over the whole peatland.
LPAZ 2e: since 1350 AD -modern period
The strong fluctuations of the Arboreal Pollen curve, Poaceae pollen and pasture indicator pollen in the last period between the Late Middle Ages and the modern day indicate strong changes in land use intensity, locally and regionally, around the peatland. Frequent wars and climatic deteriorations (Little Ice Age) might have contributed to declines in land use intensity. However, the closed cereal pollen curve and the high pasture indicator pollen curve demonstrate that land use was never given up entirely in the direct surroundings of the mire and in the adjacent valleys. Several peaks in micro charcoals could be signs of more forest clearing events by fire but could also originate from the burning of villages and towns during, for example, the Thirty Years' War, since micro charcoal particles can travel large distances by wind.
Conclusion
Our study demonstrates that human interaction with the vegetation in the Bavarian Alps has a very long history. Pollen data and soil charcoal analysis revealed possible hunting or herding activities in the Early Neolithic Age above 1400 m a.s.l., which is a unique finding for the German Alps. Further signs of summer pasturing around the Bayerische Wildalm appear in the Bronze Age, with evidence for local fire clearings and subsequent occurrences of pasture indicator pollen. Very strong evidence for summer pasturing is found beginning in the Iron Age around 800 BC. Strong decreases in Arboreal Pollen, increases in pasture indicator pollen and archaeological data from the surroundings suggest intensified land use activities in the area for that time. Further changes in land use intensity, linked to population fluctuations (decline in the Migration Period, increase during the Middle Ages), are reflected very well in the pollen curves and the charcoal data. This illustrates that our multi-proxy approach, using soil charcoal analysis together with palynology, is a reliable method to reconstruct former land use patterns and can uncover the interaction of our ancestors with nature for time periods where written sources are unavailable.
Acknowledgements
We would like to thank those who provided support in finding a suitable study site, for their help in the field and for their enthusiastic support of this study. We would further like to thank Dr. Sara Saeedi (University of Regensburg), Dr. Oliver Nelle (Landesamt für Denkmalpflege, Baden Württemberg) and Dr. Morteza Djamali (IMBE, Marseille) for their support, methodological advice, and help with pollen identification. Many thanks also to Daniel Lenz for help with the geochemical analysis.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article. | 2020-12-31T09:05:53.806Z | 2020-12-23T00:00:00.000 | {
"year": 2020,
"sha1": "e7fffe15ae5cc1841efb7296de75cd30c8035d15",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0959683620981701",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "f80363fe5e85253e38246d5d9dfc272b5a37a902",
"s2fieldsofstudy": [
"Geography",
"Environmental Science"
],
"extfieldsofstudy": []
} |
259006907 | pes2o/s2orc | v3-fos-license | Node Selection Algorithm for Federated Learning Based on Deep Reinforcement Learning for Edge Computing in IoT
: The Internet of Things (IoT) and edge computing technologies have been rapidly developing in recent years, leading to the emergence of new challenges in privacy and security. Personal privacy and data leakage have become major concerns in IoT edge computing environments. Federated learning has been proposed as a solution to address these privacy issues, but the heterogeneity of devices in IoT edge computing environments poses a significant challenge to the implementation of federated learning. To overcome this challenge, this paper proposes a novel node selection strategy based on deep reinforcement learning to optimize federated learning in heterogeneous device IoT environments. Additionally, a metric model for IoT devices is proposed to evaluate the performance of different devices. The experimental results demonstrate that the proposed method can improve training accuracy by 30% in a heterogeneous device IoT environment.
Introduction
With the continuous development of the Internet of Things (IoT) and edge computing technology, privacy issues in edge computing for IoT have become increasingly prominent [1]. Personal privacy and data leakage are among the most prominent issues. Due to the large number of sensors and devices involved in IoT, they continuously collect and transmit various types of data, including personal identification information, geographic location information, health status information, and so on [2]. If these data are obtained by malicious individuals, it could pose significant security threats and privacy risks. Another privacy issue is data security. Data in IoT are usually scattered among different devices, cloud servers, edge nodes, and sensors. These data need to be transmitted and stored, and the networks and devices used for transmitting and storing data face various security threats. For example, there may be hackers attacking the network, data centers being stolen, edge devices being eavesdropped or tampered with, and so on. These issues could all lead to data leakage and security risks. In addition, due to the inconsistency of data formats and standards among different devices and systems, data cannot be effectively shared and utilized, resulting in the problem of data silos. This not only limits the application and effectiveness of IoT but also leads to inefficiency in data management and analysis. This is because many data are stored and processed in isolation on different devices or systems, resulting in data fragmentation and the inability to achieve complete data analysis and application. With the growth of data and the increase in data transmission, IoT edge computing systems must handle more and more sensitive data, including personal privacy data and business confidential data. However, privacy and data silos are not only challenges faced by IoT edge computing but also important obstacles restricting the development of IoT technology. In order to solve these problems, federated learning has become an important solution.
Federated learning is a distributed machine learning approach that allows multiple devices or data sources to collaborate in learning without exposing raw data [3,4]. This approach not only reduces the cost of data transmission and storage, but also better protects privacy and data security, thereby avoiding privacy leaks and data loss issues. By training models without sharing data, federated learning protects the privacy of users participating in the training and improves the privacy protection and training effectiveness of edge computing in the IoT [5]. However, in the edge computing environment of the IoT, the application of federated learning faces many challenges, the most significant of which are heterogeneous devices and malicious nodes. Heterogeneous devices refer to devices participating in federated learning that have different computing capabilities, bandwidth, and data, which leads to training imbalance and instability [6,7]. Moreover, this leads to a high dimensionality of the solution space for the node selection problem in federation learning. Heuristic algorithms are prone to fall into local optimal solutions and fail to find global optimal solutions when faced with such complex problems. In federated learning, each device only uses its own local data for training, so the computing power and data quality of the device have a direct impact on the effectiveness of federated learning [8,9]. At the same time, network bandwidth between devices can also affect the training speed and effectiveness of federated learning. In federated learning, each participant, as a node, trains local data and then uploads the trained model parameters to the server for global model updates [10]. Due to the diversity and uncertainty of participants, the presence of malicious nodes may have a serious impact on the training effectiveness of federated learning [11,12]. Malicious nodes may engage in a variety of behaviors, such as transmitting false model parameters or intentionally destroying model parameters. For example, some participants may transmit incomplete or tampered data, or maliciously modify the training model to achieve their private interests or destroy the global model. These malicious behaviors may result in a decrease in the accuracy of the global model or complete collapse, seriously affecting the effectiveness and application value of federated learning. All of these issues need to be properly addressed in federated learning to enable effective model training on edge devices and ensure user privacy and security.
In summary, there are the following issues with applying federated learning in edge computing:
1. The node selection strategy in federated learning is not targeted enough, and there are few selection mechanisms specifically designed for IoT environments. Most selection mechanisms are based on random selection.
2. There are many heterogeneous devices in IoT edge computing, with different computing power, bandwidth, and data, which leads to training imbalance and instability.
3. There are some malicious devices in IoT edge computing that upload outdated or incorrect local models for various reasons, which negatively impact the convergence of the global model.
To address the problems associated with applying federated learning in edge computing networks mentioned above, this manuscript proposes the following solutions:
1. This manuscript proposes using deep reinforcement learning methods instead of traditional heuristic methods to select terminal devices to improve the accuracy of selection.
2. This manuscript proposes measuring the resource properties of IoT devices to determine their likelihood of participating in federated learning and improve the algorithm's applicability in IoT environments.
3. To address the issue of devices uploading outdated or incorrect local models, this manuscript proposes a node credibility measurement scheme to eliminate the impact of malicious nodes on federated learning in edge computing networks.
Federated Learning
In addition to privacy and security issues, the uneven distribution of data, communication network resources and computing resources leads to low efficiency of model training. In order to further optimize the iterative model updating process and improve the efficiency of federated learning, researchers have conducted a large amount of related research on these problems and on different scenarios [13][14][15]. Because the training process of federated learning needs many iterations to update the training parameters, it causes a large communication overhead, and several works have focused on optimizing the communication process. One research direction is to compress the data that need to be exchanged for the model update. For example, Sattler et al. [16] proposed the sparse ternary compression framework. Existing federated learning compression methods either only compress the upstream communication from the client to the server (without compressing the downstream communication) or only perform well under ideal conditions (e.g., independent and identically distributed data); this framework extends Top-k gradient sparsification with a novel mechanism that adds downstream compression and optimal Golomb encoding of the weight updates, so that the federated learning communication is optimized, especially in learning environments with limited bandwidth. Another research direction is to design new mechanisms and learning algorithms. Mills et al. [17] proposed a multi-task federated learning system, which improves the accuracy of user models by using distributed Adam optimization and introducing a non-federated patch batch-normalization layer, and only needs to upload a certain proportion of user data for model integration each time the model is updated. Guo et al. [18] proposed a novel transceiver design and learning algorithm based on an analog gradient aggregation (AGA) solution, which significantly reduces the multi-channel access delay. Wu et al. [19] proposed a framework for automatically selecting the most representative data from unlabeled input streams, so that edge devices with limited storage do not have to accumulate large datasets, and proposed a data-replacement strategy based on contrast scores, i.e., measuring the representation quality of each data point without using labels: data of low quality have not yet been effectively learned by the model and remain in the buffer for further learning, while data of high quality are discarded.
Federated Learning Based on Edge Computing
As an extension of cloud computing, edge computing deploys computing resources in the edge network near the user side [20][21][22][23]. The terminal equipment can directly perform data analysis, storage and computation at the edge node, realizing the service requirements of low delay, short communication distance and high reliability. As a learning mode that involves long-running distributed interaction with terminal devices, federated learning can be made considerably more effective if edge computing is used for task training or for merging models in advance. However, edge nodes generally differ from cloud computing centers in that their computing and communication resources are limited. Under the framework of a large-scale federated learning network, a large number of terminals communicate and compute through edge nodes, which is prone to communication bottlenecks and uneven resource distribution, so that the slowest participants dominate the overall delay. Therefore, it is necessary to optimize resource scheduling for federated learning based on edge computing. First, based on the traditional federated learning framework, Shi et al. [24] proposed a joint device scheduling and resource allocation strategy. According to the number of training rounds and the number of scheduled devices in each round, communication and computing resources are jointly considered, and a greedy device scheduling algorithm is designed to maximize the model accuracy under time constraints. Liu et al. [25] considered that, in the federated learning scenario based on edge computing, the model can be split so that part of it is kept for local training and the rest is offloaded to edge nodes for training, thus reducing the training burden of end users but at the same time increasing the communication overhead. Zhang et al. [26] proposed a federated learning-based service function chain mapping algorithm to solve the resource allocation problem of air-space integrated networks and effectively improve resource utilization.
In addition, some researchers innovated and optimized the framework of federal learning. Luo et al. [27] introduced a novel hierarchical joint edge learning framework, in which some model aggregations are migrated from the cloud to the edge server, and further optimized the joint consideration of computing and communication resource allocation and edge association of devices under the hierarchical joint edge learning framework. Hosseinalipour et al. [28] proposed a multi-layer federated learning framework in heterogeneous networks, which takes into account the heterogeneity of the network structure, device computing capacity and data distribution, and realizes efficient federated learning by offloading learning tasks and allocating communication and computing resources accordingly. Xue et al. [29] implemented a clinical decision support system based on federated learning in edge computing networks. The double deep Q network was deployed at the edge node, and a stable and orderly clinical treatment strategy was obtained. Considering the constraints of link limitation, delay limitation and energy limitation, Lyapunov optimization was used to improve the convergence of the system.
System Implementation
The process of the federated learning node selection mechanism based on deep reinforcement learning designed in this manuscript is shown in Figure 1 below. The physical network environment composed of IoT devices and the policy network constitutes the entire deep reinforcement learning system. When a federated learning request arrives, the policy network, acting as an intelligent agent, extracts a specific feature matrix from the physical network as input based on the current state of the IoT devices. The training is conducted in an environment built by the physical resource state, and this process is considered the environment sending a state to the agent. The intelligent agent infers the federated learning node selection decision based on the training, which is considered an action applied to the environment. The environment provides the agent with a reward signal based on the execution effectiveness of the action. The agent continually optimizes the action by interacting with the environment to accumulate the maximum reward signal.
Feature Extraction
The training environment and methods have a great impact on training effectiveness. In order to train the agent in an environment closer to the real network, this paper proposes to extract the following four device features as the attributes used by deep reinforcement learning: computing capability, communication capability, data quality, and device contribution. For IoT devices, due to their requirements for low power consumption and small size, computing power is usually limited, and it is an important measure of whether an IoT device is suitable for participating in federated learning. The computing power of an IoT device is measured by the computing power of its processor, usually expressed in FLOPS (floating-point operations per second). FLOPS refers to the number of floating-point operations that a device can complete per unit time and is an important indicator of computer performance. Generally, the FLOPS of an IoT device can be estimated as FLOPS_i = CPU_Fre_i × CPU_Core_i × CPU_FPU_i, where FLOPS_i represents the FLOPS value of IoT device i, CPU_Fre_i represents the CPU frequency of the device, CPU_Core_i represents the number of CPU cores of the device, and CPU_FPU_i represents the number of FPUs of the device.
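As a minimal illustration of this compute metric (the multiplicative FLOPS estimate mirrors the factors listed above; the concrete device numbers are hypothetical), one could write:

```python
# Sketch of the per-device compute metric described above.
# The multiplicative form (frequency x cores x floating-point units) follows
# the factors named in the text; the example values are hypothetical.

def device_flops(cpu_freq_hz: float, cpu_cores: int, cpu_fpus: int) -> float:
    """Rough peak-FLOPS estimate for an IoT device."""
    return cpu_freq_hz * cpu_cores * cpu_fpus

# Example: a hypothetical 1.2 GHz quad-core MCU with one FPU per core.
print(device_flops(1.2e9, 4, 4))  # ~1.92e10 FLOPS
```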
Communication Model
The communication resources of IoT devices refer to the network resources required for devices to communicate, including bandwidth, network delay, network stability, etc. The adequacy of communication resources directly affects the communication quality and stability of the equipment. In IoT, different devices have different communication resources. For example, infrastructure devices usually have strong communication resources, which can support high-speed and stable data transmission, while some edge devices may have relatively limited communication resources, which need to be scheduled and optimized according to their specific usage scenarios and needs. For the communication resources of IoT devices, adequate evaluation and management are required to ensure the communication quality and stability of the devices. At the same time, it is also necessary to consider the allocation and utilization of communication resources during device design and deployment to meet the communication needs of the device.
In IoT, communication between devices can use different wireless technologies, such as Bluetooth, Zigbee, and Wi-Fi. Typically, these technologies employ radio-frequency-based wireless communication techniques. In this context, the communication capability of an IoT device is measured through its bandwidth, channel, modulation method, and signal-to-noise ratio. The communication model of the device is expressed by its data transmission efficiency DTE_i, which is computed from the channel capacity CC_i, the modulation efficiency MRE_i, the signal-to-noise ratio SNR_i, and the bandwidth BW_i of IoT device i.
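A sketch of how such a communication metric could be computed is shown below. Since the exact combination used by the paper is not reproduced in the text, the sketch assumes a Shannon-capacity-based estimate scaled by the modulation efficiency; all numbers are illustrative.

```python
import math

# Illustrative data-transmission-efficiency metric. The exact formula in the
# paper is not reproduced above; this sketch assumes a Shannon-capacity-based
# estimate scaled by modulation efficiency.

def transmission_efficiency(bandwidth_hz: float, snr_linear: float,
                            modulation_eff: float) -> float:
    capacity_bps = bandwidth_hz * math.log2(1.0 + snr_linear)  # channel capacity CC_i
    return capacity_bps * modulation_eff                        # DTE_i (assumed form)

# Example: 2 MHz channel, SNR of 100 (20 dB), 80% modulation efficiency.
print(transmission_efficiency(2e6, 100.0, 0.8))
```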
Data Quality Model
In IoT applications, the data quality is crucial, as it affects subsequent data analysis and applications. The model used to evaluate the quality of data generated by IoT devices is known as the IoT device data quality evaluation model. The total data quality management (TDQM) model can help determine whether the data generated by IoT devices are reliable, accurate, consistent, complete, and usable, thus improving the accuracy and credibility of data analysis [30]. For edge IoT devices, the TDQM model based on data accuracy and data integrity is chosen in this paper to evaluate the quality of data held by edge IoT devices, assessing the data quality through the source and availability of the data. The resulting data metric Data_i of device i is derived from TDQM_i, the local data quality of the device evaluated under the TDQM model.
Equipment Contribution
In federated learning, each device participating in training needs to upload its locally trained model parameters so that the server can integrate them into a global model. However, some devices may be unwilling or unable to upload the correct or latest versions of their local models due to various reasons, such as network issues, computational resource limitations, or privacy protection. This may have a negative impact on the performance of the global model. Therefore, it is necessary to measure the contribution of each device to identify and exclude unreliable or low-contributing devices, thereby improving the quality and convergence speed of the global model. We evaluate the contribution of each device by the improvement that its local model parameters bring to the global model parameters. In this paper, the contribution of IoT device i is quantified by a value V_{i,k}, where V_{i,k} represents the contribution of device i to the global model in the k-th round of training, w_{i,k} represents the local model parameters of device i after the k-th round of training, w_k represents the global model parameters after the k-th round of training, and σ_k represents the sum of weights of all devices after the k-th round of training. For terminal device i, after extracting the above network properties, the terminal device attributes are combined into a feature vector consisting of the computing, communication, data-quality, and contribution metrics described above. All the feature vectors are then combined into a four-dimensional feature matrix. Whenever a new round of federated learning is required, the policy network extracts this feature matrix from the terminal devices as input, providing the intelligent agent with a training environment. At the same time, the feature matrix is continuously updated as the terminal devices' resources are consumed.
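The following sketch illustrates one way to compute a per-round contribution value and to stack the four metrics into the feature matrix. Measuring contribution as the normalized parameter change is an assumption made here for illustration, not the paper's exact definition, and all device values are hypothetical.

```python
import numpy as np

# Sketch of the per-round contribution metric and the feature matrix assembly.
# Using the normalised parameter change as the contribution is an assumption;
# the paper's exact definition of V_{i,k} is not reproduced in the text above.

def contribution(local_params: np.ndarray, global_params: np.ndarray,
                 weight_sum: float) -> float:
    return float(np.linalg.norm(local_params - global_params)) / weight_sum

def feature_matrix(flops, dte, data_quality, contrib):
    """Stack the four per-device metrics into an (n_devices x 4) matrix."""
    return np.stack([flops, dte, data_quality, contrib], axis=1)

# Example with three hypothetical devices.
flops = np.array([1.9e10, 4.0e9, 8.0e9])
dte = np.array([1.1e7, 3.0e6, 6.0e6])
quality = np.array([0.9, 0.6, 0.75])
contrib = np.array([0.05, 0.01, 0.03])
print(feature_matrix(flops, dte, quality, contrib).shape)  # (3, 4)
```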
Policy Network
The policy network in deep reinforcement learning is used to output a policy that selects the next action based on the current state. In our proposed method, we are selecting devices with probability greater than a specific value based on the probability of the policy network output, rather than selecting a specific number of devices to participate in the training. As shown in Figure 2, the policy network structure designed in this chapter includes four layers: extraction layer, convolutional layer, probabilistic layer, and output layer. • Extraction layer: The extraction layer, also known as the input layer, is primarily responsible for converting the input raw data into a format that can be processed by the deep neural network, usually by standardizing, normalizing, and other processing methods. In this chapter, the extraction layer extracts the feature matrix from all terminal devices based on their current states, and uses it as the input to the policy network. The feature matrix is then transferred to the next layer of the policy network. • Convolutional layer: The convolutional layer is a commonly used layer structure in deep learning. It uses convolutional kernels to perform convolution operations on input data in order to extract features. In this chapter, the convolutional layer performs convolution operations on the input vector according to the following equation:
y_{i,j} = ∑_m ∑_n I_{i+m,j+n} K_{m,n}
where y_{i,j} denotes the output matrix, I denotes the input matrix, and K denotes the convolution kernel; each element I_{i+m,j+n} of the input matrix is multiplied by the element K_{m,n} of the convolution kernel and the products are summed. The ReLU activation function is then applied and the result is fed to a fully connected layer; the generated vectors are passed to the probability layer in order to generate the probability of each node. • Probability layer: The probability layer uses the softmax function to process the feature vector and generate the probability of each terminal device. The softmax function maps the elements of a K-dimensional vector to a K-dimensional probability distribution, where each element represents a probability value in the corresponding distribution. Specifically, for a federated learning network consisting of n terminal devices, the probability layer outputs an n-dimensional probability distribution, where each element represents the probability of selecting a terminal device. In this chapter, the probability of device i participating in federated learning is computed as P_i = exp(v_i) / ∑_j exp(v_j), where the denominator is the sum of the exponential functions of all elements and the numerator is the exponential function of v_i. In this way, P_i is the probability value corresponding to the dimension where v_i is located, and the sum of all P_i is equal to 1. • Output layer: The output layer outputs the IoT devices and their probabilities of participating in federated learning.
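A compact NumPy sketch of this four-layer forward pass (feature extraction, convolution, ReLU, softmax probability layer) is given below; the kernel size, weight shapes and random inputs are illustrative choices rather than the paper's configuration.

```python
import numpy as np

# Minimal sketch of the policy-network forward pass described above
# (extraction -> convolution -> ReLU -> probability layer -> output).
# Kernel size, weights and inputs are illustrative, not taken from the paper.

def conv2d_valid(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """y[i, j] = sum_m sum_n x[i+m, j+n] * k[m, n]  (valid padding)."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def policy_forward(feature_matrix: np.ndarray, kernel: np.ndarray,
                   w_fc: np.ndarray) -> np.ndarray:
    """Return one selection probability per device via softmax."""
    h = np.maximum(conv2d_valid(feature_matrix, kernel), 0.0)  # convolution + ReLU
    v = h @ w_fc                                               # fully connected scores
    e = np.exp(v - v.max())                                    # softmax (probability layer)
    return e / e.sum()

rng = np.random.default_rng(0)
features = rng.normal(size=(10, 4))   # 10 devices, 4 metrics each
kernel = rng.normal(size=(1, 3))      # 1x3 convolution kernel
w_fc = rng.normal(size=(2,))          # maps the 2 conv outputs per device to a score
probs = policy_forward(features, kernel, w_fc)
print(probs.shape, probs.sum())       # (10,) and probabilities sum to 1
```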
Model Training
In our study, we employ deep reinforcement learning to implement a node selection strategy as shown in Figure 2. We first randomly initialize the parameters of the policy network and train it for several epochs. For the node selection task in each training iteration, we extract the feature matrix from the federated learned node set as the input of the policy network. The policy network outputs a set of available nodes and the selection probability of each node according to the node feature vector. The selection probability of each node represents the possibility of it being selected to participate in federated learning to produce better results. In the training phase, we do not select a fixed number of nodes to participate in the training but select devices whose probability value is greater than the threshold we set to participate in the training. The selected nodes will participate in the training process of federated learning and work together to learn the global model. Our node selection strategy is flexible and does not limit the number of nodes selected each time. This means that our method can adapt to federated learning scenarios of different scales and select the appropriate number of nodes according to the demand.
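Threshold-based selection can be expressed in a few lines; the threshold value below is illustrative.

```python
# Threshold-based node selection as described above: any device whose output
# probability exceeds the threshold joins the round, so the number of selected
# nodes is not fixed in advance. The threshold value here is illustrative.

def select_nodes(probs, threshold=0.08):
    return [i for i, p in enumerate(probs) if p > threshold]

print(select_nodes([0.02, 0.15, 0.09, 0.05, 0.12], threshold=0.08))  # [1, 2, 4]
```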
In deep reinforcement learning, unlike supervised learning, we do not have label information in the training data to guide the training process [31,32]. Instead, our learning agent relies on reward signals to evaluate the effectiveness of its actions. The magnitude of the reward signal indicates the agent's decision quality, with larger reward signals indicating good decisions, while smaller or even negative reward signals indicate misbehavior that needs to be adjusted [33]. The choice of reward is crucial to the training process and the formation of the final policy. In the federated learning node selection problem based on deep reinforcement learning, the effect of each round of federated learning is used as a reward signal [34]. This indicator can better reflect the contribution of all devices to the global model aggregation of federated learning under the current selection scheme, which is very representative [35,36]. Therefore, after each round of federated learning, the agent calculates the reward signal according to the aggregation effect of the global model, and updates the parameters of the policy network to optimize the performance of the policy network. Through continuous iterative training, the policy network gradually learns the optimal node selection strategy, and can provide an efficient and reliable node selection scheme for federated learning. In practical implementations, due to the lack of real label information for node selection, we introduce hand-crafted labels to approximate the agent's decision. Suppose we choose the i-th and i+2-th nodes, then in the policy network, the manual label will be an all-zero vector y, except for the i-th and i+2-th positions, which are 1. By computing the cross-entropy loss between the output of the policy network and the hand-crafted labels, we can measure the deviation of the output of the policy network from expectations and use this loss to guide the training process.
In this manuscript, backpropagation is used to calculate the parameter gradient of the policy network. First, the loss function is calculated using cross entropy based on the training samples and the output of the policy network. Then, the backpropagation algorithm is used to calculate the gradient of the loss function with respect to the parameters of the policy network. The backpropagation algorithm uses the chain rule to calculate the gradient of each parameter, starting from the output layer, calculating the partial derivative of each neuron, and then calculating the gradient of each parameter layer by layer. Finally, the gradient descent optimization algorithm is used to update the parameters of the policy network. The gradient calculated by the backpropagation algorithm indicates which direction the parameters should be adjusted, while the gradient descent optimization algorithm tells us how much to adjust. In this way, the parameters of the policy network can be continuously adjusted to improve the accuracy and performance of the policy network.
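The update described above can be sketched as follows for a bare linear scoring layer; a full implementation would backpropagate through the entire policy network, and the learning rate and data here are placeholders.

```python
import numpy as np

# Sketch of one policy update with hand-crafted multi-hot labels and a
# cross-entropy loss, using a single linear scoring layer for brevity.

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def update_policy(w, features, selected, lr=0.01):
    """features: (n_devices, d); selected: indices of the devices chosen this round."""
    y = np.zeros(features.shape[0])
    y[selected] = 1.0                          # hand-crafted label vector
    v = features @ w                           # device scores (logits)
    p = softmax(v)
    loss = -np.sum(y * np.log(p + 1e-12))      # cross-entropy against the labels
    grad_v = y.sum() * p - y                   # d(loss)/d(logits) for a multi-hot y
    grad_w = features.T @ grad_v               # chain rule back to the weights
    return w - lr * grad_w, loss               # gradient-descent step

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 4))
w = np.zeros(4)
w, loss = update_policy(w, X, selected=[1, 3])
print(round(loss, 3))
```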
Experimental Environment
For the node selection-optimized federated learning scheme (FL-IoTEL) proposed in this manuscript for IoT edge computing, we designed simulation experiments to verify its reliability. The simulation experiment simulates an IoT edge computing network with 10 edge nodes. Each edge node is connected to several IoT devices, and a total of 100 IoT devices are involved. In the experiments we set up, there are thus 110 devices involved in node selection, of which 10 are edge nodes and 100 are IoT devices. Each edge node is connected to multiple IoT devices and is responsible for aggregating the local models of its IoT devices into a global model. There are a total of 50,000 training images and 10,000 test images in the CIFAR-10 dataset [37]. CIFAR-10 has a slightly larger image size than MNIST and is in color. However, there are 10,000 more training images in the MNIST dataset than in CIFAR-10 [38]. Each IoT device is assigned different hardware metrics (such as computing power and communication capability) and different data.
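Since the exact data partitioning scheme is not spelled out above, the following sketch shows one common way to assign heterogeneous (Non-IID) local datasets to devices by label sharding; all sizes are illustrative.

```python
import numpy as np

# Illustrative way to emulate heterogeneous (Non-IID) device data by giving each
# device only a few label shards. The paper's exact partitioning scheme is not
# specified above, so this is an assumption for experimentation.

def shard_partition(labels, n_devices, shards_per_device=2, seed=0):
    rng = np.random.default_rng(seed)
    order = np.argsort(labels)                        # group sample indices by label
    shards = np.array_split(order, n_devices * shards_per_device)
    shard_ids = rng.permutation(len(shards))
    return [np.concatenate([shards[s] for s in
            shard_ids[i * shards_per_device:(i + 1) * shards_per_device]])
            for i in range(n_devices)]

labels = np.repeat(np.arange(10), 100)                # toy dataset: 10 classes x 100 samples
parts = shard_partition(labels, n_devices=100)
print(len(parts), len(parts[0]))                      # 100 devices, 10 samples each
```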
Simulation Results and Analysis
In the edge computing environment of the Internet of Things, federated learning needs to consider the characteristics of many heterogeneous devices, including but not limited to differences in hardware performance, network communication capabilities, data volume, and data quality. These differences can lead to variations in the quality and quantity of data provided by each device, as well as affecting the training speed and effectiveness of the devices. For example, some devices may have faster processors and higher memory capacities, enabling them to train and analyze more quickly, while other devices may be limited by lower hardware performance and take longer to complete the same task. Additionally, data may differ in quality and quantity depending on their source, with some data having better quality and more samples, while other data may have more noise or lack diversity. Therefore, federated learning needs to consider the heterogeneity of devices and data and use appropriate federated algorithms and node selection strategies to effectively utilize these heterogeneous resources, improve the training effectiveness and inference speed of models, and protect the privacy of devices. This section designs experiments to verify the performance of the node selection strategy for federated learning based on deep reinforcement learning under the edge computing of the Internet of Things, and compares the algorithm designed in this chapter with the traditional FedProx algorithm.
First, this manuscript compares the performance of the final global model under the condition that the data satisfy the independent and identically distributed (IID) assumption. Figure 3 shows the accuracy of the model when the local data on the IoT devices follow the IID assumption, and the size of the local dataset is the same. It can be seen that when the models of the two algorithms finally converge, there is little difference in the accuracy of the model on the test set. This is because the data distribution on the edge nodes is consistent, and the effect of randomly selecting nodes for learning is the same as that of purposefully selecting nodes on the server. However, because the scheme considered in this article not only considers the heterogeneity of node data but also the heterogeneity of resources, the algorithm proposed in this chapter is slightly better than the traditional FedProx algorithm in terms of final convergence. From Figure 3, it can also be seen that both schemes have good performance on the MNIST dataset but perform poorly on the CIFAR-10 dataset. This is because the CIFAR-10 dataset contains more information than the MNIST dataset, and the model performance is not as good as that on MNIST. This also proves that different datasets have a significant impact on learning models, and in scenarios with node selection, small-scale algorithms can always achieve optimal results. Then, experiments were conducted for devices with Non-IID data. Figure 4 shows the training accuracy when the data on IoT devices are Non-IID. It can be seen from Figure 4 that when the data are Non-IID, the algorithm proposed in this chapter has a significantly higher accuracy than the traditional FedProx algorithm. This is because when the data are IID or the differences between the data of each device are small, there is not much difference between the traditional random device selection algorithm and the federated learning node selection strategy based on deep reinforcement learning used in this chapter. However, when the data differences are large, the algorithm described in this chapter performs well because it takes into account the device data.
What is more, this manuscript simulated the performance in different Non-IID scenarios and used the variance of the local data distributions to represent the degree of data heterogeneity. The larger the variance, the greater the heterogeneity of local data on terminal devices. As can be seen in Figure 5, when the variance is small, all devices perform almost identically, and the results of random node selection are consistent with those of the algorithm-based node selection, which explains the behaviour shown in the figure. As the variance increases, the superiority of the proposed algorithm in this scenario becomes apparent, because in this case selecting nodes that are more useful for the global model is more reasonable than randomly selecting nodes. In addition, the number of users also affects the accuracy on the test set. The more nodes participate in the learning process, the better the performance of the learned model; a model trained with more participating nodes is slightly better than one trained with fewer. Finally, experiments were conducted to analyze the performance of the algorithm under device heterogeneity, and the experimental results are shown in Figures 6 and 7. As can be seen from the figures, when there is device heterogeneity, the experimental results are similar to those of data heterogeneity. As shown in Figure 6, when the performance of the devices is similar, there is not much difference in training accuracy between the two algorithms. However, when there are significant performance differences between the devices and some devices perform poorly, the algorithm proposed in this chapter has a more obvious advantage over the traditional FedProx algorithm. This is because when there is significant device heterogeneity, some devices may not be able to complete the data training task, resulting in lower accuracy of the global model. The node selection strategy proposed in this chapter is particularly effective here, selecting better-performing devices to participate in training and thereby ensuring the efficiency and accuracy of federated learning.
Conclusions
This manuscript optimized the federated learning technology in edge computing for the Internet of Things. A node selection strategy based on deep reinforcement learning was proposed to select IoT nodes to participate in the federated learning training, ensuring efficient participation of heterogeneous IoT devices and improving the privacy protection ability of edge computing. The experimental results showed that the proposed method in this manuscript can improve the training accuracy by 30% in the heterogeneous device IoT environment. This manuscript provides a new perspective to solve the privacy protection problem in edge computing for the Internet of Things and proposes a node selection strategy based on deep reinforcement learning to optimize the federated learning technology. This strategy can ensure the efficiency of heterogeneous device participation in training and improve the accuracy of the model under the premise of privacy protection. The research results of this chapter can provide new ideas and methods for privacy protection in edge computing for the Internet of Things and are expected to be more widely used in practical applications. | 2023-06-02T15:14:19.851Z | 2023-05-31T00:00:00.000 | {
"year": 2023,
"sha1": "d0cbe7a5de21deb16ef44282bfbeba1dbf84a5e1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/electronics12112478",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3186c92587748aaeb80711a2bfb8f08a834c18b0",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
220404400 | pes2o/s2orc | v3-fos-license | Edge-Corner Correspondence: Boundary-Obstructed Topological Phases with Chiral Symmetry
The bulk-edge correspondence characterizes topological insulators and superconductors. We generalize this concept to the bulk-corner correspondence and the edge-corner correspondence in two dimensions. In the bulk-corner (edge-corner) correspondence, the topological number is defined for the bulk (edge), while the topological phase is evidenced by the emergence of zero-energy corner states. It is shown that the boundary-obstructed topological phases recently proposed are the edge-corner-correspondence type, while the higher-order topological phases are classified into the bulk-corner-correspondence type and the edge-corner-correspondence type. We construct a simple model exhibiting the edge-corner correspondence based on two Chern insulators having the $s$-wave, $d$-wave and $s_{\pm }$-wave pairings. It is possible to define topological numbers for the edge Hamiltonians, and we have zero-energy corner states in the topological phase. The emergence of zero-energy corner states is observable by measuring the impedance resonance in topological electric circuits.
Topological insulators and superconductors have been among the most studied fields in this decade [1][2][3][4]. A topological number is defined for the bulk. The bulk gap must close at a topological phase transition point because the topological number cannot change its quantized value without gap closing. Consequently, the topological phase is evidenced by the emergence of gapless edge states although the bulk is gapped. This phenomenon is well known as the bulk-edge correspondence.
We generalize this concept to the bulk-corner correspondence and the edge-corner correspondence. For definiteness we consider two-dimensional systems, although generalization to higher dimensions is straightforward 5 . In the bulk-corner (edge-corner) correspondence, the topological number is defined for the bulk (edge), while the topological phase is evidenced by the emergence of zero-energy corner states. There is a case where the topological number for the bulk is expressed in terms of two topological numbers for the two edges. It is proper to regard such a system as belonging to the edge-corner-correspondence type. A typical example is given by systems having square lattice structure 6 .
Here we summarize whether the gap is open or closed in the trivial phase, at the phase-transition point (PTP) and in the topological phase for systems subject to the bulk-edge, bulk-corner and edge-corner correspondences as follows:
bulk-edge: bulk gap: o | × | o (trivial | PTP | topological), (1)
bulk-corner: bulk gap: o | × | o, (2)
edge-corner: bulk gap: o | o | o, edge gap: o | × | o, (3)
where o and × indicate that the gap is open and closed, respectively. It is important to reexamine higher-order topological insulators 6-19 and superconductors 8,20-22 according to this classification. Clearly, they are classified into these two types: a typical example of the edge-corner-correspondence type is given by the quadrupole insulator 6,12 , while a typical example of the bulk-corner-correspondence type is given by the Kagome lattice 15 . The notion of boundary-obstructed topological phases was recently introduced 23 . Several works on them have subsequently been reported 24-26 . It has been proposed that they are realized in iron-based topological superconductors 27 . These phases are characterized by the property that the bulk-band gap does not close but the edge gap closes at the topological phase transition point. Referring to the properties listed in Eqs. (2) and (3), they must belong to the edge-corner-correspondence type.
In this paper, we construct a simple model realizing the edge-corner correspondence and the boundary-obstructed topological phase in order to provide a clear understanding of these phenomena. The model consists of two Chern insulators with opposite Chern numbers. We introduce pairing interactions including the s-wave, d-wave and s±-wave pairings. The system has chiral symmetry. The topological class is BDI, which allows a topological phase in one dimension but none in two dimensions 28 . Accordingly, a topological phase transition may occur, where the edge gap is closed while the band gap is open, and zero-energy corner states emerge in the topological phase. It is possible to define the topological numbers Γ_x and Γ_y for the edge Hamiltonians along the x and y axes, and the topological number Γ_x Γ_y for the bulk. We point out that the edge-corner correspondence is observable by the impedance measurement in electric circuits.
Model Hamiltonian: We start with the Chern insulator on the square lattice, whose Hamiltonian H_Chern(k) is written in terms of the Pauli matrices σ_i, the spin-orbit interactions λ_x and λ_y, the hopping parameters t_x and t_y, and the chemical potential µ. We construct a chiral-symmetric model from it by adding a pairing Hamiltonian H_∆ 20 , where ∆_0, ∆_d and ∆_s± are the gap parameters due to the s-wave, d-wave and s±-wave pairings 29,30 , respectively. Symmetries: The Hamiltonian H has chiral symmetry, {H, τ_y} = 0. The system has time-reversal symmetry, where K takes complex conjugation. The system has particle-hole symmetry, ΞH(k)Ξ† = −H(−k) with Ξ = σ_x τ_z K. Hence, the system belongs to the class BDI, where there is no topological phase in two dimensions 28 . In addition, the system has parity symmetry PH(k)P† = H(−k) with P = σ_z.
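As a hedged numerical illustration (the explicit matrices of H_Chern and H_∆ are not reproduced in the text, so the sketch below assumes the standard two-band Chern-insulator form and keeps only the s-wave pairing), one can check that a Bloch Hamiltonian built from a τ_z block and a τ_x block anticommutes with τ_y and remains gapped in the bulk:

```python
import numpy as np

# Hedged sketch: H_Chern is taken in the standard two-band Chern-insulator form
# and only the s-wave pairing Delta_0 is kept. These explicit forms are
# assumptions for illustration; only the tau_z / tau_x block structure and the
# chiral symmetry {H, tau_y} = 0 are taken from the text above.

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def bloch_h(kx, ky, t=1.0, lam=1.0, mu=1.0, delta0=0.5):
    h_chern = lam * np.sin(kx) * sx + lam * np.sin(ky) * sy \
              + (mu - t * np.cos(kx) - t * np.cos(ky)) * sz   # assumed form
    h_delta = delta0 * s0                                      # s-wave only (assumption)
    return np.kron(sz, h_chern) + np.kron(sx, h_delta)         # tau_z and tau_x blocks

ks = np.linspace(-np.pi, np.pi, 101)
gap = min(np.abs(np.linalg.eigvalsh(bloch_h(kx, ky))).min()
          for kx in ks for ky in ks)
print(f"bulk gap ~ {2 * gap:.3f}")  # stays open for this s-wave-only choice
```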
Bulk: The energy spectrum of H is determined by the energy E_Chern of H_Chern and the energy E_∆ of H_∆. The bulk-band gap never closes as long as E_Chern is gapped. We show the band structure for various ∆_d in Fig. 1(a1)–(a3) by setting t_x = t_y = λ_x = λ_y = µ = t, ∆_0 = 0.5t and ∆_s± = 0 for simplicity. The bulk gap is independent of the magnitude of ∆_d.
Edge theory: We show the band structure of a nanoribbon for various ∆ d in Fig.1(b1)∼(b3). The edge states become gapless at a certain phase-transition point as in Fig.1(b2). In order to determine this point and to construct the topological phase diagram, we derive the edge Hamiltonians along the x and y axes.
We first construct the edge Hamiltonian along the y axis. We make a Taylor expansion at the Γ point and decompose the Hamiltonian (6) into the unperturbed Hamiltonian H_0 and the perturbed Hamiltonian H_1, where we have omitted insignificant k_y^2 terms 20 .
Here m = µ − t_x − t_y. By taking the expectation value of H_1 with respect to ψ, we obtain the edge Hamiltonian along the y axis, where α = x, y; η_x = 1 for the lower edge and η_x = −1 for the upper edge. The edge gap closes at ∆̄_x = 0.
Similarly we obtain the edge Hamiltonian for the x axis, where η_y = 1 for the right edge and η_y = −1 for the left edge. The edge gap closes at ∆̄_y = 0. Edge symmetries: The edge Hamiltonian H_α has chiral symmetry, {H_α, τ_y} = 0. The system has time-reversal symmetry, where K takes complex conjugation. The system has particle-hole symmetry, ΞH(k)Ξ† = −H(−k) with Ξ = τ_z K. Hence, the system belongs to the class BDI, where there is a topological phase in one dimension 28 . In addition, the system has parity symmetry. Edge topological numbers: The edge chiral index Γ_α, with α = x, y, is a symmetry-protected topological number associated with the chiral symmetry. It can be evaluated explicitly, and we can differentiate the trivial and topological phases by the conditions ∆̄_α > 0 and ∆̄_α < 0, respectively. Corner theory: We calculate numerically the energy spectrum and show it as a function of ∆_d for square geometry in Fig. 2(a). It consists of the bulk and edge states shown in blue and cyan, respectively. On the other hand, we can derive analytically the bulk-band gap as in Eq. (12) and the edge-band gap as in Eq. (17), which we show in Fig. 2(b). The numerical and analytical results agree very well. Additionally, we have shown zero-energy corner states in magenta in Figs. 2(a) and (b), which we derive analytically in what follows. It is prominent that the edge gap closes only at the phase transition point (∆̄_α = 0). We show the local density of states (LDOS) for the zero-energy corner states in Fig. 2(c).
We study the corner states analytically. They are described by the Jackiw-Rebbi solutions 31 of the edge Hamiltonians. The zero-energy solution along the x-axis is derived by solving Eq. (18); the solution is obtained as ψ_x(x) = exp[∓(η_x/λ_x)∆̄_x x]. Similarly, the zero-energy solution along the y-axis reads ψ_y(y) = exp[∓(η_y/λ_y)∆̄_y y].
The wave functions are well defined when they converge, leading to the condition ∆̄_x∆̄_y < 0.
The zero-energy corner states emerge only in this case.
Topological phase diagram: We construct the topological phase diagram in the (∆ x , ∆ y ) plane. As we have just derived, the condition for the emergence of the zero-energy corner states is given by (25), i.e., ∆ x ∆ y < 0, where the system is topological. On the other hand, the system is trivial for ∆ x ∆ y > 0, because there are no zero-energy states. The topological phase diagram is determined by these conditions as in Fig.2(d), with the phase boundary given by ∆ x ∆ y = 0.
We may define the topological number for the bulk, which reproduces the phase diagram determined in terms of ∆ x ∆ y . With the use of Γ α defined by (21), it is given by Γ xy , where Γ xy = 1 for the trivial phase and Γ xy = −1 for the topological phase as in Fig.2(d).
Electric-circuit realization is based on the correspondence 32,33 between the Hamiltonian H and the circuit Laplacian J such that J = iωH. The Hamiltonian H Chern has already been discussed in electric circuits 41 . The circuit corresponding to the Hamiltonian H can be constructed in a similar manner. A capacitance contributes iωC, while an inductance contributes 1/(iωL), to the circuit Laplacian; these correspond to the positive and negative hoppings in the Hamiltonian. The imaginary hoppings are represented by operational amplifiers 40 . We tune the frequency ω to be the critical frequency ω 0 = 1/ √ LC so that the circuit Laplacian is identical to the Hamiltonian.
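A small numerical sketch of the element-to-hopping dictionary stated above: at the critical frequency ω0 = 1/√(LC), a capacitor and the matching inductor contribute admittances of equal magnitude and opposite sign, so they realize hoppings of opposite sign in the circuit Laplacian. The component values are illustrative.

```python
import numpy as np

C = 1e-6                       # 1 μF (illustrative)
L = 1e-3                       # 1 mH (illustrative)
w0 = 1.0 / np.sqrt(L * C)      # critical frequency ω0 = 1/sqrt(LC)

y_capacitor = 1j * w0 * C            # admittance of a capacitor at ω0
y_inductor = 1.0 / (1j * w0 * L)     # admittance of an inductor at ω0

print(y_capacitor, y_inductor)
assert np.isclose(y_capacitor, -y_inductor)   # equal magnitude, opposite sign
```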
Admittance is obtained by the eigenvalue of the circuit Laplacian 34 , which corresponds to the eigen energy of the Hamiltonian. We show the admittance spectrum as a function of ω in Fig.3(a). The zero-admittance state appears in the topological phase, while it is absent in the trivial phase.
The circuit Laplacian can be written down explicitly, and it is straightforward to design electric circuits corresponding to it.
We analyze the impedance 34 , which is defined by Z ab = V a /I b = G ab , where G = J −1 is the Green function. It diverges at the frequency where the admittance is zero (J = 0). Note that the impedance and the admittance have an inverse relation. Taking the nodes a and b at two corners, we show the impedance in the trivial and topological phases in Figs.3(b1) and (b2), respectively. A strong impedance peak is observed at the critical frequency ω 0 in the topological phase, while it is absent in the trivial phase. It signals the emergence of zero-energy corner states. Hence, the edge-corner correspondence is observable in electric circuits.
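The impedance readout can be sketched numerically as follows. The toy 4-node Laplacian below is not the circuit of the paper; it only illustrates that when one admittance eigenvalue approaches zero and its eigenmode is localized on two "corner" nodes, the two-point impedance Z_ab = (J⁻¹)_ab between those nodes becomes very large.

```python
import numpy as np

# Orthonormal modes: the first one lives on the two "corner" nodes 0 and 3.
modes = np.array([
    [1, 0, 0, 1],
    [1, 0, 0, -1],
    [0, 1, 1, 0],
    [0, 1, -1, 0],
], dtype=float) / np.sqrt(2)

def laplacian(smallest_admittance):
    # Hermitian Laplacian with a controllable smallest admittance eigenvalue.
    eigvals = np.array([smallest_admittance, 2.0, 3.0, 5.0])
    return modes.T @ np.diag(eigvals) @ modes

for name, J in [("trivial", laplacian(1.0)), ("topological", laplacian(1e-4))]:
    G = np.linalg.inv(J)                   # Green function G = J^-1
    print(f"{name:12s} |Z_03| = {abs(G[0, 3]):.2f}")
```

The "topological" case prints an impedance several orders of magnitude larger than the "trivial" case, mirroring the peak described above.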
Discussions: We have presented a classification of various higher-order topological phases based on the bulk-corner and edge-corner correspondences. We have constructed a simple model exhibiting the edge-corner correspondence, where topological numbers are defined for edges. It provides us with a clear picture of the boundary-obstructed topological phase transition. Furthermore, we have argued that the emergence of topological corner states is detectable by observing impedance peaks in topological electric circuits. The merit of electric circuits is that we can tune the values of the elements relatively easily, which enables us to realize a topological phase transition.
The generalization to three dimensions is straightforward. We may consider bulk-hinge and surface-hinge correspondences for the second-order topological phases 5 . We may also consider bulk-corner, surface-corner and hinge-corner correspondences for the third-order topological phases 5 .
The author is very much grateful to N. Nagaosa for helpful discussions on the subject. This work is supported by the Grants-in-Aid for Scientific Research from MEXT KAKENHI (Grants No. JP17K05490 and No. JP18H03676). This work is also supported by CREST, JST (JPMJCR16F1).
Supplemental Material
Edge-Corner Correspondence: Boundary-Obstructed Topological Phases with Chiral Symmetry
Motohiko Ezawa, Department of Applied Physics, University of Tokyo, Hongo 7-3-1, 113-8656, Japan
S1. Second-order topological phases in three dimensions
The second-order topological phases in three dimensions are characterized by the phenomenon that gapless hinge states emerge although the bulk and the surface are gapped. They are classified into the bulk-hinge-correspondence type and the surface-hinge-correspondence type according to the geometrical object upon which the topological numbers are defined. Hereafter, let us abbreviate the A-B-correspondence type as the A-B type. (i) In the bulk-hinge type, the bulk gap closes at the phase transition point. The topological number is defined for the bulk. (ii) In the surface-hinge type, the surface gap closes but the bulk gap remains open at the phase transition point. The topological number is defined for the surface.
These phenomena are summarized as (i) bulk-hinge correspondence: (ii) surface-hinge correspondence:
S2. Third-order topological phases in three dimensions
The third-order topological phases in three dimensions are characterized by the phenomenon that zero-energy corner states emerge although the bulk, the surface and the hinge are gapped. They are classified into three types, the bulk-corner type, the surface-corner type and the hinge-corner type, according to the geometrical object upon which the topological numbers are defined. (i) In the bulk-corner type, the bulk gap closes at the phase transition point. The topological number is defined for the bulk. (ii) In the surface-corner type, the surface gap closes but the bulk gap remains open at the phase transition point. The topological number is defined for the surface. (iii) In the hinge-corner type, the hinge gap closes but the bulk and surface gaps remain open at the phase transition point. The topological number is defined for the hinge.
These phenomena are summarized as (i) bulk-corner correspondence: | 2020-07-09T01:01:26.015Z | 2020-07-08T00:00:00.000 | {
"year": 2020,
"sha1": "2c8a074978cc8e6b914735efd491bd0e5605c5f2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2007.03884",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "750e5676c7106eedc7f42a21da7f6f4bad15bced",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
269737500 | pes2o/s2orc | v3-fos-license | Less is more? Communicating SDG orientation and enterprises ’ economic performance
As the interest in sustainable development increases, businesses can benefit from aligning their orientation with the Sustainable Development Goals (SDGs). It remains unclear, however, how focusing on a broader or narrower set of SDGs affects enterprises' economic performance. This study examines the impact of a communicated SDG orientation on the economic performance of social enterprises and traditional commercial businesses. Using natural language processing (NLP) techniques to analyse textual content from 661 enterprises' websites, we found a positive relationship between the communication of a narrow set of SDGs and enterprises' economic performance. The extent of this effect is similar between social and traditional commercial enterprises. Therefore, stakeholders may value an enterprise's SDG orientation strategy that focuses on a narrow set of SDGs in distinct purpose-driven institutional contexts.
Introduction
Businesses are increasingly encouraged to adopt practices that contribute positively to social and environmental issues (Kuang, 2021).Recent literature emphasises the importance of enterprises balancing social and economic pursuits (Battilana et al., 2019).As such, the United Nations Sustainable Development Goals (SDGs) have become a widely accepted benchmark for enterprises' sustainability orientation (Amel-Zadeh et al., 2021).While a number of scholars suggest that the pursuit and communication of such goals could improve enterprise competitiveness (Demuijnck and Fasterling, 2016), doing good does not always mean doing well (Kautonen et al., 2020;Lynn, 2021;Mansouri and Momtaz, 2022).Recent studies demonstrated the dark side of sustainability orientation on enterprises' competitiveness (Kautonen et al., 2020;Muñoz and Kimmitt, 2019).As such, the extent of enterprises' SDG orientation can be a critical strategic choice for corporate performance (Giarratana and Pasquini, 2022;Hornstein and Zhao, 2018).
The integration of sustainability into corporate strategy, known as sustainability orientation (Roxas and Coetzer, 2012), has become a vital aspect of business strategy (Ahmić, 2022).One stream of literature is devoted to assessing enterprises' sustainability orientation by exploring corporate narratives through textual communication (e.g., Mansouri and Momtaz, 2022;Moss et al., 2018).Prior research suggests that communicating a broad set of SDGs (e.g., diverse social offerings) can positively impact an enterprise's economic performance (Landrum, 2018;Seo et al., 2021).This is because focusing on a variety of sustainability demands can enhance enterprises' legitimacy and competitiveness (Landrum, 2018).At the same time, scholars have suggested that focusing on a broad set of SDGs can negatively impact an enterprise's economic performance (Giarratana and Pasquini, 2022).The broad set of SDGs can increase enterprises' operational complexity (Battilana and Dorado, 2010), leading to business failure (Muñoz et al., 2018).This risk is particularly pronounced when there is a great emphasis on social goals, which is usually the case for social enterprises (Young, 2012).
Therefore, recent stakeholder theory developments challenge the doing well by doing good paradigm and recommend further exploring provisional and contextual aspects that reward sustainability orientation (Lynn, 2021). We contribute to this understanding by exploring to what extent a communicated SDG orientation positively impacts enterprises' performance. Our research question frames the inquiry: How does the communication of SDGs affect enterprises' economic performance?
We deploy an SDG textual classifier based on Natural Language Processing (NLP) to examine the influence of enterprises' orientation towards the SDGs on the economic performance of social enterprises and traditional commercial enterprises. NLP is a computational approach that has the potential to advance management theories by effectively extracting valuable insights from large amounts of textual data that traditional approaches could not provide (Kang et al., 2020).
Theory and hypotheses development
Stakeholder theory posits that a firm should create value not only for investors but for all stakeholders (Freeman et al., 2010).The key idea that holds this concept is that enterprises focusing on purposes aligned with their main stakeholders are more likely to have economic success (Freeman, 2023).Yet, this can only happen if stakeholders are aware of enterprises' purposes, which can be achieved through effective communication of such purposes (Balmer, 2017;Lepkowska-White et al., 2022).For these reasons, the range of a communicated enterprise's SDG orientation can be crucial to balancing the dynamics of doing well and doing good (e.g., Giarratana and Pasquini, 2022;Seo et al., 2021).
Several studies grounded in stakeholder theory have found that sustainability orientation positively relates to enterprise performance (Muñoz and Kimmitt, 2019).The underlying rationale is that aligning business practices with the growing demand for sustainability (Landrum, 2018) can lead stakeholders to perceive sustainable businesses as more valuable than their less sustainable counterparts.Yet, recent studies proposed that sustainability orientation can help businesses, but it can also be costly and distract management from their commercial goals (Muñoz et al., 2018;Wang and Bansal., 2012).In this matter, scholars grounded in stakeholder theory suggest that the positive relationship between sustainability orientation and enterprises' economic performance is contingent on provisional and contextual aspects (Lynn, 2021;Kautonen et al., 2020).One of the key arguments is that enterprises can lose sight of their economic objectives when pursuing and communicating a broad range of social purposes (Kautonen et al., 2020).Therefore, enterprises' SDG orientation could represent a trade-off (Giarratana and Pasquini, 2022;Seo et al., 2021).We follow Kautonen et al. (2020), proposing that the positive relationship between sustainability orientation and enterprises' economic performance is contingent on the extent of enterprises' SDG orientation.
The stakeholder theory indicates that enterprises can enhance long-term value creation by prioritising the interests of a wide range of stakeholders (Freeman et al., 2010).Accordingly, Seo et al. (2021), based on the quantity of philanthropy causes within similar nature categories, suggested that enterprises can enhance their competitiveness by addressing a broad set of SDGs.Enterprises may reach more stakeholders by communicating a broader array of SDGs, as individuals can resonate with one or more of the enterprises' SDGs (Giarratana and Pasquini, 2022).It also increases the chance that enterprises reflect heightened investors' and other stakeholders' attention to the SDGs (Amel-Zadeh et al., 2021).At the same time, the stakeholder theory goes beyond this argument, suggesting that enterprises should generate "as much value as possible for stakeholders, without resorting to trade-offs" (Freeman et al., 2010, p. 28).While a wide range of SDGs can increase the number of supports, it can also increase the enterprise's tensions in dealing with many different stakeholders' demands (Amel-Zadeh et al., 2021;Giarratana and Pasquini, 2022).An antagonistic relationship usually leads to trade-offs that can undermine either enterprise's sustainability or profit objectives (Kautonen et al., 2020).Giarratana and Pasquini (2022) reinforce the argument by exploring enterprises' SDG orientation through product portfolio in terms of quantity.They suggested that a broad range of SDGs could increase business economic tensions by escalating costs, dividing focus on investments, and harming the enterprises' image among stakeholders.Therefore, by grounding on the stakeholder theory, we could assume that communicating a narrow set of SDGs can attract stakeholders with similar interests, decreasing the conflicts that generally lead to trade-offs (Giarratana and Pasquini, 2022).
Empirical studies explore the orientation of enterprises towards SDGs in terms of quantity and nature (e.g., Giarratana and Pasquini, 2022;Seo et al., 2021).Many SDGs are correlated by nature and tend to be treated simultaneously (Giarratana and Pasquini, 2022).Therefore, we explore the extent of enterprises' SDG orientation considering the number of goals (SDGs-quantity) and their similarity according to the nature of their outcomes (SDGs-similarity).The SDG outcomes can be classified according to the SDG "wedding cake" framework (Rockström and Sukhdev, 2022), where the layers represent the economy, society, and biosphere (Folke et al., 2016).Although the SDG outcomes are interconnected across various sustainability dimensions due to their systemic nature, their placement in a specific layer indicates an emphasis on a particular area (Fet et al., 2023).Therefore, we employ the wedding cake framework to assess each SDG from a business perspective (e.g., Fet et al., 2023).We hypothesise that focusing on a narrow set of SDGs that emphasise a specific layer of sustainability (economy, society, or biosphere) could enhance enterprises' economic performance, as summarised in H1 and H2.
H1.
There is a negative relationship between the quantity of SDGs orientation and enterprises' economic performance.
H2.
There is a positive relationship between enterprises' SDGs orientation emphasising similar sustainability outcomes and their economic performance.
Furthermore, explanations of the relationship between doing well and doing good "are couched within the details of relevant institutional context" (Lynn, 2021, p. 525). Social enterprises and traditional commercial enterprises operate within distinct institutional contexts (Kautonen et al., 2020). Although the literature lacks comparative insights on the topic, scholars offer some evidence. The centrality of social purposes over economic purposes distinguishes social enterprises from traditional commercial ventures (Bandyopadhyay and Ray, 2020). The higher the centrality of social purposes, as observed in social enterprises, the higher the enterprise's social visibility (Giarratana and Pasquini, 2022). As such, higher social visibility can moderate the relationship between enterprises' SDG orientation and economic performance (Giarratana and Pasquini, 2022). Still, similar to the dynamics presented by traditional commercial enterprises, an enterprise enhances legitimacy and competitiveness by conforming to stakeholder expectations (Bansal and Roth, 2000).
Social enterprises tend to face more scrutiny from the stakeholders over the authenticity of their social claims than traditional commercial enterprises (Giarratana and Pasquini, 2022).It occurs due to the visibility of social purposes as a central attribute of enterprises' competitiveness (Bandyopadhyay and Ray, 2020).A social enterprise failing to demonstrate commitment to one of the SDGs claimed could face authenticity threats to all other SDGs (Alhouti et al., 2016).Moreover, clear communication of value propositions is a crucial aspect of a social enterprise's legitimacy (Mersland et al., 2019).Communicating a broad set of SDGs may attract a diverse group of stakeholders, increasing the complexity of dealing with several stakeholders' expectations (Teasdale, 2010).In turn, the tensions can threaten the legitimacy of social enterprises (Battilana and Dorado, 2010), leading to reduced stakeholder support (Doherty et al., 2014;Klein et al., 2021) and potentially contributing to business failure (Battilana and Lee, 2014).We expect the dynamics of a narrow set of SDGs to be present from the vantage point of a social enterprise and a traditional commercial enterprise, as presented in H3.
H3.
The relationship between a narrow set of SDGs and enterprises' economic performance is positive for both social and traditional commercial enterprises.
Methods
The "About Us" section of enterprise websites is where they usually share information about "who they are" and "what they do" (e.g., Haans, 2019). In our observational study, we utilised the textual content in the "About Us" section to evaluate how enterprises strategically align with the SDGs. To overcome challenges in measuring SDG properties (Amel-Zadeh et al., 2021), we applied the Open SDG (OSDG) classifier, an NLP model proposed by Pukelis et al. (2022). The OSDG is a multilingual tool built from an ontology integrating existing SDG research for classifying text data by SDGs (Pukelis et al., 2022). We performed sanity checks to ensure this approach accurately identifies SDGs within a business context.
Sample and procedures
We sampled private enterprises from the PrivCo Database1 for the United States market. The PrivCo dataset is a financial data provider on major private companies and has been cited by many researchers in the field (e.g., Cao et al., 2017; Chen and Kelly, 2015). In addition, we used web scraping techniques to select enterprises whose websites contained "About Us" textual content that met the criteria for the OSDG textual classifier. Our final sample includes 661 observations, with 105 (16%) social enterprises and 556 (84%) traditional commercial enterprises across various industry sectors.2
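For illustration, a minimal sketch of this data-collection step is given below. The URL is a placeholder, and classify_sdgs is a toy keyword matcher standing in for the OSDG classifier, whose actual interface is not reproduced here.

```python
import requests
from bs4 import BeautifulSoup

def fetch_about_us(url: str) -> str:
    """Return the visible text of an 'About Us' page."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return " ".join(soup.get_text(separator=" ").split())

def classify_sdgs(text: str) -> set[int]:
    """Toy keyword matcher standing in for the OSDG classifier (illustration only)."""
    keywords = {4: ["education", "training"], 8: ["employment", "economic growth"],
                13: ["climate", "emissions"]}
    lowered = text.lower()
    return {goal for goal, words in keywords.items() if any(w in lowered for w in words)}

if __name__ == "__main__":
    text = fetch_about_us("https://example.com/about-us")  # placeholder URL
    print(sorted(classify_sdgs(text)))
```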
Measures
Dependent Variable. The dependent variable in our study is Productivity, which serves as a crucial measure of an enterprise's economic performance (Abbott et al., 2019; Battilana et al., 2015; Bagnoli and Megali, 2011; Lee and Seo, 2017). Productivity is a numerical variable represented by the natural logarithm of the ratio between the annual revenue and the number of employees for 2020. Productivity is a valid measure for comparing the economic performance of social and traditional commercial enterprises (Abbott et al., 2019; Lee and Seo, 2017) across different industry sectors (Battilana et al., 2015).
Independent Variables. We draw the core independent variables, namely SDGs-quantity and SDGs-similarity, from the enterprises' "About Us" web pages using the OSDG textual classifier (Pukelis et al., 2022). SDGs-quantity represents the number of unique SDGs identified in each enterprise's textual content from SDG1 to SDG16 (e.g., Patuelli and Saracco, 2023). The measure is a continuous variable from 1 to 16 related to the number of SDGs; it moves towards 16 when more text content is matched across the different SDGs (e.g., value 3 = SDG1, SDG6, SDG12). SDGs-similarity classifies the SDGs presented in an enterprise narrative into SDG wedding cake layers according to the nature of the goal's outcome. It is measured from 1 to 3 based on the number of layers: economy, society, and biosphere. Values 1, 2, and 3 mean that the SDGs are classified under one layer, two layers, and three layers, respectively. As presented in Fig. 1, the SDG wedding cake is based on the framework proposed by Rockström and Sukhdev (2022), which assigns SDGs as follows: the top layer, the Economy, comprises goals 8, 9, 10, and 12; the middle layer, Society, encompasses goals 1, 2, 3, 4, 5, 7, 11, and 16; the bottom layer, Biosphere, includes goals 6, 13, 14, and 15. In addition, Type is a binary variable indicating whether a firm is a social or traditional commercial enterprise. We used the B Corp Certification to identify the businesses with core social hybrid organisation aspects in our sample (e.g., Cao et al., 2017; Siqueira et al., 2018).
Control Variables. We used the number of employees to control for firm size. It reflects the scale of business operations and resource access (Cacciolatti et al., 2020). Age is a numerical variable indicating the number of years since the enterprise's founding. The effect of sustainability orientation on financial outcomes may differ depending on the firm's age (e.g., Cacciolatti et al., 2020; Mansouri and Momtaz, 2022). Industry denotes the industry group a firm belongs to, and it is encoded as a dummy variable (e.g., Grimes et al., 2018). The relevance of the SDGs varies across industries due to the nature of operational sustainability impacts (Saetra, 2021).
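A short sketch of how these measures can be computed once the SDG labels are available; the wedding-cake layer assignment follows the mapping stated above, and the revenue and employee figures are made-up illustrations.

```python
import numpy as np

# SDG "wedding cake" layers as assigned in the text.
LAYERS = {
    "economy": {8, 9, 10, 12},
    "society": {1, 2, 3, 4, 5, 7, 11, 16},
    "biosphere": {6, 13, 14, 15},
}

def productivity(annual_revenue: float, employees: int) -> float:
    """Dependent variable: ln(annual revenue per employee)."""
    return float(np.log(annual_revenue / employees))

def sdgs_quantity(sdgs: set) -> int:
    """Number of unique SDGs identified in the enterprise narrative."""
    return len(sdgs)

def sdgs_similarity(sdgs: set) -> int:
    """Number of wedding-cake layers (1-3) touched by the identified SDGs."""
    return sum(bool(sdgs & layer) for layer in LAYERS.values())

example = {1, 6, 12}                    # e.g. SDG1, SDG6, SDG12
print(productivity(2_500_000, 20))      # ~11.7 (made-up revenue and headcount)
print(sdgs_quantity(example))           # 3
print(sdgs_similarity(example))         # 3 (one SDG in each layer)
```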
Data analysis
Data analysis proceeds in three main steps. First, we used descriptive analysis to understand the SDG orientation among social and traditional commercial enterprises. Second, we tested H1 and H2 using OLS regression models and H3 using interaction effects. Third, we carried out robustness tests by checking multicollinearity and fitting multilevel models using R packages.
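A hedged sketch of these modelling steps in Python is shown below (the study itself used R packages); the DataFrame and column names are assumptions for illustration.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Assumed columns: productivity, sdg_quantity, sdg_similarity, social (0/1),
# employees, age, industry.
# df = pd.read_csv("enterprises.csv")

def fit_models(df: pd.DataFrame):
    # OLS test of H1 (SDGs-quantity) and H2 (SDGs-similarity) with controls.
    m1 = smf.ols("productivity ~ sdg_quantity + sdg_similarity + social"
                 " + employees + age + C(industry)", data=df).fit()
    # H3: interaction between enterprise type and SDG quantity.
    m2 = smf.ols("productivity ~ sdg_quantity * social + sdg_similarity"
                 " + employees + age + C(industry)", data=df).fit()
    # Robustness: multilevel model with a random intercept per industry sector.
    m3 = smf.mixedlm("productivity ~ sdg_quantity + sdg_similarity + social"
                     " + employees + age", data=df, groups=df["industry"]).fit()
    return m1, m2, m3

def vif_table(df: pd.DataFrame, cols):
    # Variance inflation factors for the chosen predictors.
    X = sm.add_constant(df[list(cols)])
    return {c: variance_inflation_factor(X.values, i + 1) for i, c in enumerate(cols)}
```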
Descriptive statistics
Fig. 2 shows that the enterprises of our sample touched all labelled SDGs.In proportion, social enterprises tend to demonstrate higher levels of SDG orientation than traditional commercial enterprises.Moreover, by analysing the overall frequency, the SDG9-Industry, Innovation & Infrastructure (28.8%) is the most identified goal in our sample.In addition, SDG4-Quality Education (14.8%) and SDG8-Decent Work and Economic Growth (11.9%) are the second and third most common goals in the enterprises' narratives.In contrast, the SDGs related to biodiversity, SDG15-Life On Land (0.43%) and SDG14-Life Below Water (0.75%), followed by SDG2-Zero Hunger (0.75%) and SDG13-Climate Action (0.75%), are among the goals that received less attention.In addition, a fraction of enterprises is aligned with SDG10-Reducing Inequality (2%) and SDG5-Gender Equality (1%).Regarding the quantity of SDGs communicated per enterprise, 7 is the maximum number of goals identified with 1.7 as the mean value.
Regression models
Table 1 reports the results for our model specifications. In Model 1 we tested H1 and H2, and in Models 2 and 3 we tested H3. Adding the independent variables improved the model fit compared to using them solely as covariates, F(4, 648) = 16.42, p < 0.001. We checked for multicollinearity using the VIFs, which are below the widely accepted threshold of 10 (e.g., Kautonen et al., 2020). The highest score is 7.8, which comes from the interaction term. Next, due to the nested structure of the data, we used a multilevel model to account for differences across industries (Models 4, 5 and 6). We have 661 firms distributed among 7 industry sectors, where the majority (n = 238) are from financials, and the minority (n = 6) are from energy industry sectors. These results are similar to those in Models 1, 2 and 3. In H1, we tested the relationship between SDGs-quantity and economic performance. Model 1 reveals a significant negative association between the number of SDGs mentioned in an enterprise's narrative and the likelihood of higher productivity (β = −0.34, p = 0.03). Our results support H1, suggesting that enterprises that present a narrow SDG orientation are more likely to present higher economic performance. In H2, we tested the relationship between SDGs-similarity and productivity. Model 1 demonstrates a non-significant relationship between enterprises' economic performance and SDGs-similarity (β = 0.19, p = 0.18). Our results do not support the proposition that a communicated SDG orientation strategy emphasising a specific layer of sustainability could benefit enterprises' economic performance. H3 anticipates a similar effect of SDG orientation on the performance of both social enterprises and traditional commercial enterprises. First, we added an interaction term between the enterprises' type and SDGs-quantity (β = 0.10, p = 0.62) for productivity in Model 3. Next, we added an interaction term between the enterprises' type and SDGs-similarity (β = 0.20, p = 0.29) for productivity in Model 4. In both cases, the interaction term is not significant. Thus, as anticipated, when comparing a social enterprise to a traditional commercial enterprise, the result indicates similar patterns in the effects of SDG orientation on economic performance.
Discussion
Drawing on stakeholder theory and recent literature on entrepreneurship, we assume that one critical strategic decision for doing well by doing good is the enterprise's SDG orientation (e.g., Freeman, 2023; Giarratana and Pasquini, 2022; Hornstein and Zhao, 2018). Our results demonstrated that communicating an enterprise's purpose with a narrow set of SDGs in terms of quantity, regardless of the sustainability dimension of the SDG outcomes, tends to be positive for economic performance. This relationship tends to be similar among social enterprises and traditional commercial enterprises. Therefore, our insights demonstrate that the extent of SDGs communicated by firms holds significance for their stakeholders, which, in turn, positively impacts their economic performance.
The stakeholder theory suggests enterprises should create as much value as possible for stakeholders.Still, making as much value as possible does not mean placing enterprises' strategic orientation towards a large array of SDGs.As Freeman (2023) suggested, enterprises with a purpose aligned with their main stakeholders to guide the day-to-day operations are likely to achieve maximum value creation.Therefore, our findings corroborate the proposition that communicating a broad range of SDGs could increase enterprises' complexity and divert management attention away from commercial objectives (Giarratana and Pasquini, 2022).When comparing social enterprises and traditional commercial enterprises, Kautonen et al. (2020) suggested that adhering to stakeholder preferences for sustainability can be good for business if not taken too far (Kautonen et al., 2020).Our findings support that both social and traditional commercial enterprises should focus on a narrow number of SDGs to better balance sustainability orientation and economic performance.In addition, following the dynamics of authenticity (Alhouti et al., 2016) and mission drift (Battilana and Dorado, 2010), we understand that focusing on a broad range of SDGs can increase the risk of jeopardising enterprises' image among stakeholders by appearing less authentic, especially in the context of social enterprises.
Furthermore, our results demonstrated a non-significant relationship between a communicated SDG orientation strategy with similar sustainability outcomes and economic performance.This notion can be supported by the interconnectivity between different SDGs (Philippidis et al., 2020;Smith et al., 2021).The "wedding cake" framework illustrates how SDGs are interlinked (Folke et al., 2016).For instance, challenges like climate change, hunger, and poverty are correlated and should be treated simultaneously (Giarratana and Pasquini, 2022).Yet, poverty and hunger fall within the societal layer of the cake, while climate change is part of the biosphere layer.This illustrates the bidirectional relationship between the economy (top cake layer), which serves society (middle cake layer), which in turn operates within the biosphere (bottom of the cake) (Philippidis et al., 2020).Therefore, our results highlight a nuanced consideration of specific SDGs in a way that suits organisational purpose, instead of a more aggregated view of the sustainability dimensions.For enterprises, especially social enterprises (Kouamé et al., 2022), its long-term depends on whether sustainability propositions resonate with the stakeholders that support them (Giorgi, 2017).In this matter, scholars suggest that a category signals essential features to an audience, which helps stakeholders formulate their expectations towards that category (Romanelli and Khessina, 2005).Our results suggest that communicating a sustainability-oriented strategy at the goal level can serve as a signal that resonates better with stakeholders.Moreover, the SDG9-Industry, Innovation & Infrastructure from the economy layer is the goal that exhibits the highest frequency.This aligns with the literature suggesting that SDG9 is the most discussed goal in corporate practices (Mio et al., 2020).In contrast, the SDGs related to biodiversity, SDG15-Life On Land and SDG14-Life Below Water, are among the goals that received less attention.Biodiversity conservation is among the worldwide problems that businesses have less practised (Addison et al., 2019).In addition, our results demonstrated that a fraction of enterprises is aligned with SDG10-Reducing Inequality and DG5-Gender Equality, topics that heightened social attention in recent years, such as the emergence of the "me-too" movement in the U.S. (Amel-Zadeh et al., 2021).
In summary, the strategic selection of social goals can impact the balance between economic and social welfare logic (see Kautonen et al., 2020;Battilana and Dorado, 2010).Yet, the "responsiveness to stakeholders alone is not guarantee of performance" (Kautonen et al., 2020, p. 6).The stakeholder theory suggests that trade-offs must be avoided (Freeman, 2023).A key aspect might be a narrow SDG orientation strategy to align with their main stakeholders while decreasing conflicts.
As with many entrepreneurship research studies, there is limited availability of economic-financial data for private enterprises (Wasserman, 2017). Therefore, a common limitation is the relatively small sample size. In addition, this study only focuses on enterprises from the U.S. Given the increasing presence of social enterprises worldwide, we cannot fully generalize our results to countries with distinct institutional settings (e.g., She and Michelon, 2023).
Implications for theory and practice
This study has implications for both research and practice.For research, this study contributes to the entrepreneurship literature and stakeholder theory by exploring and comparing conditions under which enterprises' SDG orientation positively influences economic performance across social enterprises and traditional commercial businesses (Abbott et al., 2019;Doherty et al., 2014;Lynn, 2021).In addition, this study serves as a proof of concept of the use of big data to overcome challenges in measuring enterprises' contribution to the SDGs (Amel-Zadeh et al., 2021;Mio et al., 2020;Patuelli and Saracco, 2023).Finally, we add to the literature by stimulating wider discussions on aligning enterprises' strategies with the SDGs (Günzel-Jensen et al., 2020).
For practice, given that the SDGs challenge businesses worldwide (Rosati and Faria, 2019) and that the literature directly addressing business and the SDGs is sparse (Mio et al., 2020), our results can inform social entrepreneurs and business leaders in reshaping their sustainability-oriented strategies. Our results inform social entrepreneurs and business leaders about the value of communicating a narrow set of SDGs for enterprises' economic performance. Furthermore, our findings suggest that enterprises should align their purpose with their main stakeholders (not all) to guide day-to-day operations and achieve maximum value creation. Trying to do it all generates complexity and tension. A focused strategy would allow stakeholders to evaluate what the enterprise can achieve and attract those who share similar interests. It is an important insight for enterprises, especially social enterprises, that often have limited resources and must decide wisely about their allocation (Miles et al., 2014).
Conclusion
Our study investigates enterprises' SDG orientation through the lens of stakeholder theory using enterprises' textual communication.The findings show that stakeholders can positively value a narrow set of SDGs for both social enterprises and traditional commercial enterprises.For practitioners, this implies that focusing deeply on a smaller set of SDGs may be perceived better than trying to do it all.The main implication for future research is a call for empirical evidence that compares conditions under which enterprises can do well by doing good considering social and traditional commercial enterprises' contexts.
Declaration of competing interest
None. | 2024-05-12T15:17:00.203Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "f77b7a0aff855a848042ff3fe0d8238c1625c0f9",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1016/j.jbvi.2024.e00470",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c90c751c584eec2ea85c461a6d367d23dc8bac64",
"s2fieldsofstudy": [
"Business",
"Economics",
"Environmental Science"
],
"extfieldsofstudy": []
} |
1435342 | pes2o/s2orc | v3-fos-license | Meeting Postpartum Women’s Family Planning Needs Through Integrated Family Planning and Immunization Services: Results of a Cluster-Randomized Controlled Trial in Rwanda
Integrating contraceptive services into infant immunization services was effective, acceptable, and feasible without negatively affecting immunization uptake. Yet unmet need for contraception remained high, including among a substantial number of women who were waiting for menses to return even though, at 6 months or more postpartum, they were at risk of an unintended pregnancy. More effort is needed to educate women about postpartum return to fertility and to encourage those desiring to space or limit pregnancy to use effective contraception.
INTRODUCTION
Healthy timing and spacing of pregnancies (HTSP) improves the health of both mothers and their children. [1][2][3][4][5][6][7][8][9] Risks of miscarriage, abortion, and maternal death are much greater when births are spaced less than 2 years apart. [2][3][4][5]8 Preterm birth, low birth weight, stillbirth, and newborn death are also more likely when births are spaced too closely together. 2,[7][8][9][10] Unmet contraceptive need is high for many postpartum women in sub-Saharan Africa; across 21 low- and middle-income countries, an estimated 61% of postpartum women had unmet contraceptive need. 11 The extended postpartum period, 12 months after childbirth, can be a time of particularly elevated risk for an unplanned pregnancy. Research indicates as many as 40% of women who state they intend to use a contraceptive method 0-12 months postpartum do not do so. 12 To reduce unmet contraceptive need, postpartum women need access to family planning information and services, yet reaching these women in Africa is often difficult because many do not deliver within health facilities and even fewer attend routine postpartum visits. Most postpartum women do, however, seek routine health services for their infants, including for immunizations. Given their timing, infant immunization services provide an important opportunity to reach postpartum women repeatedly throughout the postpartum period.
Health service integration has become an important topic of discussion in global health. Intuitively, moving from vertically administered services to an integrated platform has the potential to improve service delivery efficiency, access, and uptake. Infant immunization services, with their worldwide success, have been an important focus for many integration efforts. 14 A variety of maternal and child health services has been integrated into immunization services, including vitamin A supplementation, deworming, malaria prevention, nutrition, and HIV services, and there have been some efforts to integrate family planning with infant immunization. 14 Although integrated family planning and infant immunization services are not new, limited evidence exists to support its effectiveness. One study in Togo (1994) demonstrated that a simple family planning referral message delivered during immunization services increased the number of family planning clients by over 50% without decreasing immunization service use. 15 A second study in rural Bangladesh (2001) showed that introducing integrated family planning and child immunization services increased contraceptive prevalence from 28% to 53%. 16 However, a more recent (2009)(2010) cluster-randomized controlled trial in Ghana and Zambia that used screening and referral to family planning services demonstrated no increase in contraceptive method use among postpartum women attending infant immunization. 17 The remaining evidence to support the strategy is largely derived from observational studies or programmatic experiences. 18,19 The situation in Rwanda reflects that of many other countries in the region. Despite improvements in health care access and use in the past decade, facility-based deliveries and postpartum care remain underutilized. Nearly one-third of women in Rwanda deliver their babies at home, and only one-fifth receive any postpartum care. 20 However, Rwanda has one of the most successful infant immunization programs in Africa. 21 According to 2010 Demographic and Health Survey estimates, 86% of children under 2 in Rwanda received all recommended vaccinations within their first year. 20 With high immunization attendance levels, immunization services could be an effective contact point for reaching postpartum women with family planning services in Rwanda. As such, we developed an intervention, using the Health Belief Model (HBM), to integrate elements of family planning services into infant immunization. The HBM model is one of the most widely used conceptual frameworks guiding the design of health behavior interventions and has been applied to sexual risk behaviors and contraceptive behaviors in a variety of populations including populations in sub-Saharan Africa. 22 Given the dearth of existing evidence, this study was designed to test the effectiveness of this enhanced offering of family planning services during infant immunization visits to increase contraceptive method use among postpartum women.
Study Design
The study was a separate sample, parallel, clusterrandomized controlled trial of a health services intervention designed to improve family planning use, thus reducing unmet contraceptive need among postpartum women attending public health care facilities in Rwanda. A cluster-randomized design was selected due to the design of the experimental intervention, which was delivered to both individuals and in group educational settings. Fourteen public primary health care facilities were randomly selected from a national sampling frame then randomly allocated to intervention or control groups of equal size (7 intervention facilities and 7 control facilities). A structured questionnaire was administered to postpartum women attending immunization services for their infant ages 6 to 12 months in all 14 study facilities during the baseline period (May-June 2010), immediately followed by intervention implementation in intervention group facilities (beginning in July 2010).
After baseline data were collected, intervention group immunization and family planning providers attended a 3-day training on postpartum family planning and the use of a screening tool to assess pregnancy risk among postpartum women. Providers in the control facilities received no training and continued to deliver services as usual. The endline questionnaire was administered to a separate sample of postpartum women 16 months after the beginning of intervention implementation.
Cost data were extracted from financial reports and confirmed with project personnel. To ensure intervention implementation fidelity, a study staff and a district Ministry of Health officer carried out quarterly supportive supervision visits.
Intervention Description
The primary goal of the intervention was to increase uptake of family planning methods among postpartum women, thus reducing unmet contraceptive need. This goal was to be accomplished through enhanced components of family planning services delivered during infant immunization services designed to improve access to family planning services and to improve knowledge with regard to postpartum family planning. The intervention included 4 distinct yet interrelated components that were delivered to women attending all infant immunization services (i.e., at 6, 10, and 14 weeks and 9 months post-delivery).
1. Concise messages delivered during group education sessions. Immunization service providers routinely provide education to women attending immunization services on a variety of health topics. The delivery of such information did not change in intervention facilities, but the content on family planning was strengthened and delivered at each session, in addition to any other information being delivered. In intervention facilities, providers included the following information in all pre-immunization group education talks in the facility:
- The risk of becoming pregnant after delivery of a baby if a woman is sexually active and not using a modern contraceptive method (which included the Lactational Amenorrhea Method [LAM] for women less than 6 months postpartum), even if she is breastfeeding or if her menses had not returned
- The benefits of family planning to help women and their families time and adequately space their pregnancies for the health of the mother and her baby
- Safe and effective contraceptive options for women to use during the postpartum period, even if breastfeeding
2. A simple brochure in the local language (Kinyarwanda) distributed during group education. The brochure contained messages about LAM, return to fertility and pregnancy risk during the postpartum period, the benefits of spacing pregnancies by at least 2 years, and contraceptive options for postpartum women. Brochures (Supplementary Material 1) were provided to women so they could share the information with their husbands.
3. Individual screening of all women attending infant immunization services by the immunization provider. An immunization provider met one-on-one with each mother, during which the provider asked the mother about the 3 LAM criteria to screen her for current risk of becoming pregnant. The provider also offered a brief counseling message depending upon risk classification and referral to family planning services for those currently or soon-to-be at risk of pregnancy. Providers were trained on a job aid (Supplementary Material 2) to assist the screening and counseling process. The screening questions were asked either as the baby received his/her immunization or by a separate provider while women were waiting for their baby's immunization.
[Photo: A woman attending infant immunization services in Rwanda also receives screening for family planning services.]
4. Convenient offer of family planning services to women attending immunization at the same facility and on the same day as immunization services. A family planning provider was available to receive clients as they were referred from immunization for family planning information/counseling and to initiate the woman's method of choice, per the facility's family planning service standard.
No changes were made to the way in which family planning services were delivered with the exception of timing of availability; however, family planning service providers did receive reinforcement on the safety and appropriate timing of modern contraceptive methods for use by postpartum women for both those who did breastfeed and for those who did not.
The brochure and key messages delivered through group education were adapted from the Extending Health Service Delivery (ESD) project and the Access to Clinical and Community Maternal, Neonatal and Women's Health Services (ACCESS-FP) project, both funded by the United States Agency for International Development (USAID). [23][24][25] In addition to improving basic knowledge about postpartum family planning, the materials were also informed by the Health Belief Model. Specifically, the intervention was designed to act on women's perceptions, including perceived severity of an unplanned pregnancy, perceived susceptibility to an unplanned pregnancy if sexually active and not using a contraceptive method, perceived benefits of contraception to prevent an unplanned pregnancy, and perceived barriers to accessing family planning services ( Figure 1).
To accommodate the differing staffing and physical structure of health centers, each facility was asked to determine the most appropriate way to structure their integrated service delivery. We considered this ability to structure the service delivery according to the local context as an important element for sustainability. All facilities opted to offer family planning services concurrently with immunization so that once a woman was finished with her infant's immunization she could see the family planning provider directly.
In addition to these 4 main intervention components, District Health Managers (DHMs), who already conducted routine supervision visits to health care facilities, were accompanied by a study staff member to observe the integrated service delivery on a quarterly basis. The DHM and study staff used a checklist designed by the study to assess the quality of the services being offered and to reinforce activities when necessary. This information was also used by the study to ensure the intervention was being delivered as intended and to take corrective action if necessary.
To put the intervention in place, immunization service providers, family planning service providers, and facility supervisors from intervention group facilities participated in a 3-day training session. This training covered essential postpartum family planning topics and the intervention components, in addition to a general refresher on family planning methods that the Ministry of Health opted to add. Training materials were also adapted from the ESD and ACCESS-FP materials. During the course of the study, it was observed that several trained providers had been transferred out of some of the intervention facilities leaving few or no providers who had undergone the initial training, so a 1-day refresher training on the intervention was carried out in each of the 7 intervention facilities. Health care providers and facilities in the control group received no additional training or support.
[Photo: In Rwanda, clients review the integration project's brochure about family planning while awaiting immunization for their infants.]
Study Sample
The sample size of postpartum women was calculated to be able to detect a 12 percentage point difference in current modern contraceptive method use from baseline to endline between the intervention and the control groups (difference of differences), assuming a baseline contraceptive prevalence of 27% based on DHS data available at the time. 26 Our primary sampling unit (PSU) was public primary health care facilities, and our secondary sampling unit included postpartum women who brought their infants for immunization. Adjusting for clustering at the facility level and assuming an intra-class correlation of 0.023 based on similar work in Madagascar, 27 we estimated that a sample of 55 women interviewed at each time point in each of 14 health facilities, for a total of 770 women (385 participants per study arm) at both baseline and endline, would have 80% power to detect an intervention effect for a 1-sided test at the .05 significance level.
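The design-effect logic behind this calculation can be sketched as follows; this is the standard two-proportion formula with a cluster adjustment and is illustrative rather than an exact reproduction of the authors' computation.

```python
from scipy.stats import norm

p1, p2 = 0.27, 0.39        # assumed baseline prevalence and prevalence after a 12-point rise
alpha, power = 0.05, 0.80  # one-sided test
m, icc = 55, 0.023         # women per facility and intra-class correlation

z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
n_srs = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2
deff = 1 + (m - 1) * icc   # design effect for cluster sampling
n_cluster = n_srs * deff

print(f"per-arm n under simple random sampling: {n_srs:.0f}")
print(f"design effect: {deff:.2f}; per-arm n allowing for clustering: {n_cluster:.0f}")
```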
At the time of the study, there were 253 public primary health care centers in Rwanda ( Figure 2).
The sampling frame was restricted to public primary health care facilities with an average monthly client volume of at least 50 infant measles immunizations (n = 149) to ensure data collection could be completed within a reasonable time frame. Because health care is decentralized to the district level and district health managers supervise all facilities monthly, there was concern that if more than one facility was selected from a district and if the facilities were randomized to different treatment groups, there could be contamination across facilities. Therefore, we stratified the sampling frame by districts and then selected a random sample such that no more than one facility was sampled in any district, using the ''Surveyselect'' procedure in SAS System for Windows, version 9.3. 28 Random allocation of facilities to treatment arms was also carried out using SAS/STAT software, version 9.3. 28 Within the health centers, women were enrolled to participate in the study in the order they arrived during data collection until the sample size was achieved in each facility. Eligible participants included adult women, 21 years or older, or women 18 to 20 who had achieved legal majority status by emancipation due to marriage, who brought their own infants between 6 and 12 months of age to immunization services. We sampled women who were 6 to 12 months postpartum so that all women in the sample who desired to initiate a modern contraceptive method to delay or limit a new pregnancy should have done so by that point, if they followed the information and counseling being offered through the intervention. Prior to 6 months, some women may have been actively, or passively, protected by LAM; however, the intervention messaging clearly indicated to women that by 6 months postpartum, they should initiate a contraceptive method besides LAM if they wished to use family planning.
Data Collection and Measures
Trained data collectors obtained written informed consent from all study participants and then interviewed study participants using structured questionnaires during exit interviews conducted in a private setting within the health facility. Data were collected on: (1) participant demographics; (2) reproductive and family planning history; (3) current pregnancy risk (infant age, breastfeeding status, status of menses, return to sexual activity); (4) postpartum family planning knowledge; and (5) perceptions toward postpartum family planning, including perceived susceptibility to and severity of an unplanned pregnancy, perceived benefits of family planning, perceived barriers to using family planning, and self-efficacy for using family planning. Participants in both groups were asked about the acceptability of integrating family planning services into infant immunization services and on satisfaction with their immunization service visit that day. The primary outcome variable was self-reported current modern contraceptive method use. Modern methods comprised oral contraceptive pills, injectable contraceptives, contraceptive implants, female or male sterilization, intrauterine devices, male or female condoms, spermicides, or the Standard Days Method. For this study, emergency contraception was not counted as a modern method. LAM was not included because all participants were more than 6 months postpartum. Because population-level data demonstrate that family planning use increases with woman's age, parity, and education and is associated with relationship status and partner approval, we included these as control variables. 20,26 Occupation, which is related to education, was also included as a control variable.
[Footnote a: Women were recruited if their babies were between 6 months and 1 year old (180-365 days), as calculated by their birthdate. In a handful of cases, mothers of infants who were near but outside the range of 180-365 days completed interviews because they reported their infants as being of eligible age, but the age differed slightly once calculated in days, and these were excluded from analyses.]
Immunization service data, including the monthly number of immunization clients by immunization type (e.g., measles, polio, etc.), were collected for the duration of the intervention beginning July 2010. Data on costs associated with carrying out the intervention were also recorded, including material development and reproduction, staff training, and quarterly supportive supervisory visits.
Data Analysis and Hypotheses
The main hypothesis tested in this study was that postpartum women who attend immunization clinic services for their infants ages 6-12 months in the intervention facilities will be more likely to use a modern contraceptive method than postpartum women who attend immunization clinic services for their infants, ages 6-12 months, in control group facilities. We tested this hypothesis with a linear mixed regression model using a 1-sided test at the .05 significance level with 12 degrees of freedom. The model controlled for age, parity, education, occupation, current relationship status, and perceived partner family planning approval, and it accounted for clustering by facility and time.
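A sketch of this main difference-of-differences test in Python is given below (the study used SAS); the variable names and data layout are assumptions, with a facility-level random intercept standing in for the clustering adjustment.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns (all numeric): modern_method (0/1), intervention (0/1),
# endline (0/1), age, parity, education, occupation, in_union,
# partner_approves, facility (cluster id).
# df = pd.read_csv("exit_interviews.csv")

def test_intervention_effect(df: pd.DataFrame):
    model = smf.mixedlm(
        "modern_method ~ intervention * endline + age + parity + C(education)"
        " + C(occupation) + in_union + partner_approves",
        data=df,
        groups=df["facility"],          # random intercept for the facility cluster
    ).fit()
    # The intervention:endline coefficient is the difference of differences.
    term = "intervention:endline"
    estimate = model.params[term]
    # One-sided p-value for a positive effect: halve the two-sided p if estimate > 0.
    p_one_sided = model.pvalues[term] / 2 if estimate > 0 else 1 - model.pvalues[term] / 2
    return estimate, p_one_sided
```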
We also used linear mixed regression models to examine differences between groups and time for all descriptive variables separately and for bivariate analyses between individual characteristics and current modern contraceptive method use. The same approach was used to test differences between study groups and HBM variables. All models accounted for cluster effects. Analyses were carried out with the SAS System for Windows, version 9.3. 28 Costing analyses were carried out using Microsoft Excel. We calculated the cost of the intervention per facility and estimated both the cost per full exposure, assuming 4 immunization visits over the 12-month extended postpartum period, and cost of the intervention per new acceptor. We used the number of babies who received measles vaccines from January-September 2011 (N = 5,036) in the intervention facilities as a proxy for the number of women who received the full 4-visit exposure. Because this was a 2-group, separate sample study, we did not have the actual number of new family planning acceptors. Instead, the number of family planning acceptors was calculated based on the number of women estimated to have been exposed to the intervention multiplied by the effect of the intervention shown in the study. We then divided the total incremental cost of the intervention by that number.
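The costing arithmetic described above can be sketched as follows; the total incremental cost is a placeholder value, not a figure from the study.

```python
# Assumed/placeholder inputs are marked below.
n_full_exposures = 5036            # measles doses, Jan-Sep 2011, as a proxy for exposed women
did_effect = 0.15                  # difference-of-differences in modern method use
total_incremental_cost = 10_000.0  # placeholder USD; substitute the actual project cost

estimated_new_acceptors = n_full_exposures * did_effect
cost_per_full_exposure = total_incremental_cost / n_full_exposures
cost_per_new_acceptor = total_incremental_cost / estimated_new_acceptors

print(f"estimated new acceptors: {estimated_new_acceptors:.0f}")
print(f"cost per full (4-visit) exposure: ${cost_per_full_exposure:.2f}")
print(f"cost per new acceptor: ${cost_per_new_acceptor:.2f}")
```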
Ethics Approval
This study was approved by the Rwanda National Ethics Committee and FHI 360's Protection of Human Subjects Committee, and it is registered on the US National Institutes of Health ClinicalTrials.gov database, registry #NCT01115361.
Participant Characteristics
The final sample included in the analysis comprised 825 women from the intervention group and 829 women from the control group. Study participants were demographically similar in intervention and control groups at baseline and endline (Table 1). Average age among participants was 27-28 years, and nearly all were Christian-fairly evenly divided between Protestantism and Catholicism. Most women had at least a primary school education, and more than three-fourths were literate. Just over half of all women reported working outside the home.
Contraceptive Use and Unmet Need
Modern contraceptive method use was relatively high among study participants; roughly half of women across study groups at both time points were using a modern contraceptive method ( Table 2). Unmet contraceptive need was also high (45.6% in the control group and 39.2% in the intervention group at endline); nearly all women not currently using a modern method desired to space or limit their births ( Table 2). Just under half of respondents reported desiring no more children; among those who desired additional children, nearly all wanted to wait at least 2 years before their next pregnancy. Most respondents in both groups at both time points had returned to sexual activity since childbirth.
Intervention Effect
To assess the effectiveness of the intervention in increasing modern contraceptive method use, we examined the change in modern method use between intervention and control groups across time points (Table 3). Results showed that the intervention had a statistically significant and positive effect on modern method use among intervention group participants compared with control group participants (regression coefficient, 0.15; 90% confidence interval [CI], 0.04 to 0.26). In other words, we observed an increase of about 8 percentage points in the intervention group and a decrease of about 7 percentage points in the control group, resulting in a 15 percentage point difference between the intervention and control groups when comparing baseline to follow-up results. Although we conducted a 1-sided significance test, this effect was also significant under a 2-sided test with an alpha of .05 (95% CI, 0.01 to 0.29).
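The headline 15 percentage point figure can be checked with a back-of-the-envelope difference-of-differences calculation from the group changes reported above (this reproduces the unadjusted difference, not the adjusted mixed-model coefficient):

```python
# Back-of-the-envelope difference of differences from the changes reported in
# the text (expressed as proportions, not the adjusted mixed-model estimate).
change_intervention = +0.08  # ~8 percentage point increase, baseline -> endline
change_control = -0.07       # ~7 percentage point decrease, baseline -> endline

difference_of_differences = change_intervention - change_control
print(f"difference of differences = {difference_of_differences:.2f}")  # 0.15, i.e., 15 percentage points
```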
Health Belief Model Effects
Concerning the HBM concepts and contraceptive method use, women with a higher perceived susceptibility to an unplanned pregnancy were more likely than those with a lower perceived susceptibility to use a modern contraceptive method at both baseline and follow-up (linear mixed model regression estimate, 0.24; P = .05) (Table 4). Greater perceived severity of an unplanned pregnancy and perceived benefits of family planning were also significantly and positively associated with greater family planning use (regression estimates, 0.04 and 0.06, respectively). Greater perceived barriers to family planning use were associated with lower family planning use (regression estimate, -0.14); however, this finding was not statistically significant.
We found a small but statistically significant change in perceived susceptibility to an unplanned pregnancy between intervention and control groups from baseline to follow-up; perceived susceptibility increased from baseline to endline in intervention facilities and decreased in control facilities. However, no other significant changes were observed among HBM concepts because perceptions of severity of an unplanned pregnancy and benefits of family planning were already very high in both groups, and perceived barriers to family planning were relatively low (data not shown).
Reasons for Non-Use
At endline, we asked those not currently using a modern method (non-users) their reasons for non-use (data not shown). Responses were similar in both groups; the most common reason reported was that participants were awaiting menses to return to initiate a method (46.1% in the control group, 50.3% in the intervention group). Other important reasons for not using a contraceptive method included fear of side effects or health problems associated with family planning (19.4% in the control group, 13.4% in the intervention group), as well as currently breastfeeding (8.2% in the control group, 11.2% in the intervention group).
Postpartum Women's Perspectives on Family Planning and Immunization Service Integration
At endline, women in both study groups were asked about their perspectives on integrating family planning services components into infant immunization services (Table 5). Women in both groups nearly universally (98%) agreed that infant immunization services were a good time to receive information on family planning options. Approximately three-quarters of all women also stated that they preferred to get family planning services on the same day when they bring their infants for immunizations. Fewer than 20% overall stated they did not think the immunization service visit was the appropriate time to receive family planning information; most of these women stated they preferred to come when they did not have their babies with them (data not shown). Women in both groups were very satisfied with the services they received (Table 5). There was no difference between study groups in overall satisfaction, satisfaction with the wait time, or in the proportion of women who stated that providers treated them with respect. However, more women in the intervention group reported that they were given the opportunity to ask health-related questions and that the provider was able to give them the information they needed.
Immunization Service Statistics
Given the possibility that integrating family planning education, pregnancy risk screening, and family planning services could have a negative effect on immunization uptake, we collected service data on measles immunization (scheduled between 9 and 12 months of age) at all facilities (Figure 3). There was considerable monthly variation in the numbers of infants immunized at all facilities. Observed peaks are likely due to periodic community outreach efforts (such as Mother and Child Health Weeks, which occur in March and November each year) during immunization campaigns to reach unimmunized children. We observed no downward trend in the numbers of infants immunized for measles in the intervention facilities over the course of the study period, indicating that once family planning services were integrated into immunization services within intervention facilities, immunization uptake did not decrease. Although we present data only for the measles vaccine in Figure 3, trends were similar for all recorded vaccines, including the first 3 doses of diphtheria, tetanus, and pertussis vaccine (DTP).
Costs of the Intervention
Intervention costs included:
Document development and reproduction (job aid and brochures)
Preparation visits to the intervention sites
Training providers and supervisors
Launching activities in intervention sites
Quarterly supervision visits to the intervention sites during implementation
One-day refresher training for all intervention sites

After subtracting costs not relevant to scale-up (e.g., staff time to develop training materials), the total cost was US$24,203. This resulted in a cost of US$4.81 per woman for full exposure to the intervention (at the 6-, 10-, and 14-week and 6-month immunization visits). Assuming that 15% of women would accept a new contraceptive method, based on the 15 percentage point difference observed between the intervention and control groups, the intervention cost was US$32.06 per new family planning acceptor.
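The unit costs follow from simple division of the totals reported above; the small discrepancy from the published US$32.06 is due to rounding. A quick check:

```python
# Reproducing the reported unit costs from the totals given in the text.
total_incremental_cost = 24_203.00  # US$, after removing costs not relevant to scale-up
women_fully_exposed = 5_036         # measles doses (Jan-Sep 2011) used as a proxy for fully exposed women
effect = 0.15                       # 15 percentage point difference, taken as new acceptors per exposed woman

cost_per_full_exposure = total_incremental_cost / women_fully_exposed
cost_per_new_acceptor = total_incremental_cost / (women_fully_exposed * effect)

print(f"cost per fully exposed woman: US${cost_per_full_exposure:.2f}")  # ~4.81
print(f"cost per new acceptor:        US${cost_per_new_acceptor:.2f}")   # ~32.04 (reported as US$32.06)
```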
DISCUSSION
This study contributes to the small global body of evidence on the effectiveness of integrated health service delivery and supports previous findings that integrating family planning service components into infant immunization services can be an effective, acceptable, and feasible strategy for increasing family planning service uptake among postpartum women. Clients were overall very positive about receiving family planning information and services during infant immunization, and integration did not adversely affect reported service satisfaction. We also demonstrated that adding family planning service components did not negatively affect immunization service uptake, which has been a concern among some public health professionals. 13,29 The intervention sought not only to increase modern contraceptive method uptake but also to improve postpartum family planning knowledge among women. Given that half of non-users at endline stated that they were awaiting menses to return to initiate a method despite being 6 or more months postpartum, it appears that the intervention was not completely successful at dispelling this misconception. Our findings are consistent with several other investigations that have explored the relationship between postpartum amenorrhea and initiation of a family planning method, which found that many women await the return of menses before initiating a family planning method, potentially placing them at risk for an unintended pregnancy. [30][31][32] More efforts are needed to ensure women and their providers understand that postpartum women can become pregnant before their menses return, even while breastfeeding, and that sexually active women who desire to space or limit their pregnancies should initiate an effective modern contraceptive method as early as possible. Additional research is needed to better understand the persistence of these misperceptions and to test strategies to address them.
The integration approach employed in this study permitted a degree of flexibility within health facilities with regard to how the family planning and immunization services were co-delivered, which we believe is important to successful implementation and sustainability. However, several systemic challenges affected integrated service implementation. We selected public primary health centers where both immunization and family planning services are offered. In Rwanda, family planning services are offered every day of the week, although not necessarily at the same time as immunization services. In settings where family planning services are not offered daily, this intervention may require significant changes to service delivery, which may or may not be feasible. We also found that a single training session for providers was insufficient for successful implementation and that supportive supervision was key to successful, ongoing implementation. Until providers are experienced in the new service delivery strategy, maintaining the level of supportive supervision provided in the study intervention could prove a challenge to some district health personnel.
We also found that, despite a conducive service delivery setting, provider attrition through transfers caused trained providers to be replaced with untrained providers, affecting service delivery. The most permanent comprehensive solution to this problem would be to include training on integrated family planning and immunization service delivery in preservice education for all providers and to systematically train all current primary care providers on the strategy through new or existing training opportunities.
At US$32.06 per new acceptor, this strategy could be considered relatively expensive; however, many expenses can be reduced or eliminated by incorporating the work into existing activities. Two of the largest expenses were provider training and quarterly supervision visits, both of which could be integrated into existing training and supervision activities.
Limitations
This study used a rigorous study design; however, it is not without limitations. We used a separate sample approach to measure intervention effects. The primary reason for enrolling separate samples of postpartum women was to enroll women who could be fully exposed to the intervention. Although most women in Rwanda bring their babies to health facilities to receive immunizations, only about half of women delivered in health facilities at the time of the study. 26 Sampling women at the time of delivery and following them during the postpartum period would have limited our sampling frame to only women who delivered in a facility as opposed to all women who bring their infants to immunization services. We believe the information gained through this sampling strategy outweighs its limitations. A second limitation was the relatively small number of facilities included in the study sample. Having only 7 PSUs per study arm decreased the likelihood that randomization led to completely comparable study groups. In response, we chose to examine the difference in modern contraceptive method use from baseline to endline between study groups (difference of differences), reducing the study's power to detect statistically significant differences. Despite this limitation, we observed a statistically significant and positive intervention effect.
Study Implications
This rigorous trial demonstrated that integrating family planning service components into infant immunization services can be an effective, acceptable, and feasible strategy for increasing modern contraceptive use among postpartum women without negatively affecting immunization services. The study addressed some of the limitations previously noted in the literature on this topic by including a control group, collecting extensive process data to understand implementation fidelity, and collecting costing data.
Further research is needed to test this approach in other settings. Given that this is one of only a handful of research studies directly examining the effectiveness of integrated family planning and infant immunization services and that the study enrolled a relatively small number of health care facilities within one country setting, attempting to replicate these results elsewhere is important. Additionally, a strong policy environment aiming to improve maternal and child health, extensive engagement of the Ministry of Health in the planning and implementation of the intervention, as well as intensive efforts to strengthen the country's health system likely played important roles in the success of the intervention. Replicating this work in settings where political or health system support is not as strong may not generate the same results.
In March 2013, the Rwanda Ministry of Health held a national meeting to discuss results of this and other studies focused on improving family planning services and contraceptive uptake. Participants recommended national scale-up of the intervention and initiated discussions on changes to service delivery guidelines, supervision requirements, training curricula, and data collection systems to support scale-up. Recommendations were sent to the national Maternal and Child Health Technical Working Group for incorporation into Ministry of Health and partner organization work plans. | 2017-10-17T10:13:13.077Z | 2016-03-01T00:00:00.000 | {
"year": 2016,
"sha1": "f5350ebb8e7fb0d851154bf487e13dcb606ad583",
"oa_license": "CCBY",
"oa_url": "http://www.ghspjournal.org/content/ghsp/4/1/73.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d600e8b41ba7eadf358800099f5966f911fcb72b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3228883 | pes2o/s2orc | v3-fos-license | Monosodium glutamate as a tool to reduce sodium in foodstuffs: Technological and safety aspects
Abstract Sodium chloride (NaCl) is the most commonly used ingredient to provide salty taste to foods. However, excess sodium in the bloodstream has been associated with the development of several chronic noncommunicable diseases. In order to limit sodium intake to levels considered safe, the World Health Organization (WHO) recommends for adults a daily intake of not more than 5 g of NaCl (less than 2 g of sodium). One of the strategic actions recommended by the Pan American Health Organization (PAHO) to reduce sodium intake is reformulation of processed foods. This recommendation indicates there is an urgent need to find salt substitutes, and umami compounds have been pointed out as an alternative strategy. Like salty, umami is also a basic taste, and the major compound associated with umami is monosodium L‐glutamate (MSG). The available scientific data on the toxicity of MSG have been evaluated by scientific committees and regulatory agencies. The Joint FAO/WHO Expert Committee on Food Additives and the Scientific Committee on Food of the European Commission established an acceptable daily intake (ADI) "not specified", indicating that the substance offers no health risk when used as a food additive. The United States Food and Drug Administration and the Federation of American Societies for Experimental Biology classified MSG as a Generally Recognized as Safe (GRAS) substance. In this paper, an overview of salty and umami taste physiology, the potential applications of MSG use to reduce sodium content in specific industrialized foods, and safety aspects of MSG as a food additive are presented.
| INTRODUCTION
Over the last few years, significant changes have occurred in the global food market, mainly due to the growth of the population and of cities, including in Brazil (Trading Economics, 2013). These changes may be among the reasons for the increase in the commerce of processed foods, such as canned and ready-to-eat products. In 2014, marketing studies verified an increase of 9% in this sector, and the expectation was that by 2019 it could reach a retail volume of 13% of food sales (3.1 billion units of canned products) (Euromonitor International, 2015).
According to the Brazil Food Trends 2020 survey (Ibope Inteligência, 2010), this growth in the market of processed foods in Brazil may have partly occurred due to changes in lifestyle and an increased demand from consumers for more convenient products, in addition to new product launches and promotional activities. The survey also pointed out that sensorial quality (indicated by 23% of the consumers interviewed), along with health and wellness (indicated by 21% of the consumers interviewed), were identified as future consumption trends, reflecting the new demands of consumers.
In this sense, certain types of products have been more highly valued, especially those with reduced sodium, sugar and fat content.
Sodium chloride (NaCl) is the most used ingredient to provide a general flavor in foods. In addition to its role in taste, NaCl and other food additives that contain sodium have other functions such as preservation, the acceleration of fermentation reactions and texture maintenance (Henney, Taylor, & Boon, 2010). However, excess sodium in the bloodstream has been associated with the development of various noncommunicable diseases (NCDs), also known as chronic diseases that are not passed from person to person, including hypertension and other heart problems, kidney disease, stomach cancer, and osteoporosis. To limit sodium intake at safe levels, the World Health Organization (WHO) recommends a maximum daily consumption of 5 g of salt (NaCl) for adults, which is equivalent to less than 2 g of sodium/day (WHO 2007).
Based on data published by the Brazilian Household Budget Survey (Pesquisa de Orçamentos Familiares - POF) carried out in 2002-2003 and 2008-2009, Brazilians have been consuming a large amount of sodium (4.7 g of sodium/day), which corresponds to more than twice the safe level (less than 2 g of sodium/day) proposed by the World Health Organization (WHO, 2007). A study conducted by Sarno, Claro, Levy, Bandoni, and Monteiro (2013) also demonstrated that processed foods contributed to at least 25% of the sodium intake (mainly the sodium coming from NaCl addition) by middle-class and upper-class families of the country.
The results of these studies were used to strengthen the commitments between the Brazilian Ministry of Health (Ministério da Saúde - MS) and the Brazilian Association of Food Industries (Associação Brasileira das Indústrias da Alimentação - ABIA), which were signed in 2007 and stimulated food producers to improve the supply of healthy foodstuffs, including products with reduced sodium content. To develop strategies to reduce sodium intake to 2 g/day by 2020, the government renewed the agreements (BRAZIL, 2007).
Among these strategies, which should also take into account the sensory quality of foods, the use of flavor enhancers such as monosodium L-glutamate (MSG) can be considered a promising alternative with great potential for application in the food industry (Jinap & Hajeb, 2010). MSG, which is the sodium salt of L-glutamic acid (or L-glutamate, its dissociated form), is the most well-known flavor enhancer used in foods, but other molecules such as nucleotides (inosinate and guanylate), other glutamate salts associated with ammonium, potassium and calcium, and other additives that contain elevated concentrations of L-glutamate, named Natural Flavor Enhancers (NFE), such as yeast extract and products from wheat and soy fermentation, are also available in the market to enhance the flavor of foods (McGough, Sato, Rankin, & Sindelar, 2012; Yamaguchi & Takahashi, 1984).
According to the Technical Report on Sodium Content in Processed Foods published by the Brazilian Health Surveillance Agency (Agência Nacional de Vigilância Sanitária - ANVISA) in 2012 and 2013, some industrialized products such as soups, stocks and seasonings, instant noodles, certain snacks, processed meats and grated parmesan cheese exhibit very high sodium contents (ANVISA, 2012, 2013; Sarno et al., 2009). The use of MSG and other flavor enhancers in these foodstuffs is allowed in Brazil (ANVISA, 2001) and by the Southern Common Market (MERCOSUR, 2010). Thus, this paper presents a review of salty and umami taste physiology, the potential applications of MSG to reduce sodium content in specific industrialized foods, and safety aspects of MSG as a food additive, in order to contribute to the development of sodium-reduced food products without impairing either the sensorial quality of the foods or the health of the population.
| TASTE PHYSIOLOGY: SALTY × UMAMI TASTES
Physiological mechanisms involved in taste perception have been discussed by researchers since the beginning of the 20th century (Trivedi, 2012).
Regarding the salty taste, which is promoted by sodium chloride and potassium chloride, among other substances, the prevailing hypothesis is that ion dissociation occurs when these molecules come into contact with saliva. Sodium ions then cross specific ion channels, named ENaCs (epithelial sodium channels), which are located in the membranes of taste cells. After the ions enter, membrane depolarization occurs, charging the membrane positively.
This depolarization increases the membrane electric potential and stimulates taste nerves, sending signals to the brain to recognize the salty taste or the presence of ions that activate the electric potential (Beauchamp & Stein, 2008;Chandrashekar et al., 2010;Geran & Spector, 2000).
In relation to the umami taste, glutamate or nucleotides come into contact with their specific G protein-coupled receptors (GPCRs), such as mGluR4 (metabotropic glutamate receptor 4) and the T1R1/T1R3 heterodimer, which are present in the taste buds of the mammalian tongue (Nelson et al., 2002; Chaudhari et al., 2009). G-proteins are high-molecular-mass compounds with different protein subtypes; each one comprises two functional units: Gα and Gβγ. One of the mechanisms proposed earlier suggested that the substances responsible for umami activate the Gα subunit of the GPCR.
However, more recent investigations have shown that the main pathway for umami taste transduction seems to be related to the Gβγ subunit (Chaudhari & Roper, 2010).
After the interaction between the umami compound and the GPCR, the βγ subunit disconnects from the Gαβγ complex and activates the enzyme phospholipase C, leading to the generation of IP3 (inositol triphosphate). IP3 binds to calcium channels located in the endoplasmic reticulum, stimulating their opening and the release of calcium (Ca++) into the cytosol. Calcium ions, now at elevated concentrations in the cytoplasm, bind to TRPM5 channels in the membrane, promoting a fast influx of sodium into the cell and, consequently, depolarizing the membrane. The combined action of the elevation of calcium and membrane depolarization provokes the opening of the gap junctions, which are probably composed of pannexins (Panx1), thus promoting the release of a large amount of ATP to the extracellular space (Chaudhari & Roper, 2010; Kinnamon, 2009).
The released ATP stimulates the afferent nerve fibers and, at the same time, excites adjacent presynaptic cells, which promote the release of 5-HT (serotonin) and NE (norepinephrine), producing the gustatory sensation in the brain. So far, it is known that when the taste buds are stimulated with umami substances, signals are registered in the primary and orbitofrontal gustatory cortices (de Araújo, Kringelbach, Rolls, & Hobden, 2003).
The interactions between the sensations of umami and salty tastes were evaluated by Yamaguchi and Kimizuka (1979). The authors verified that some intensification of the salty taste occurs when umami substances are present. The main impact is the increase in salivary secretion, smoothness and continuity of the flavor in the mouth.
Nevertheless, the exact mechanisms of gustatory reception are not clear and need further investigation (Chaudhari & Roper, 2010).
| APPLICATION OF FLAVOR ENHANCERS AS A TOOL TO REDUCE SODIUM CONTENT IN SPECIFIC FOOD PRODUCTS
Free glutamate occurs naturally in several foods, such as tomatoes, parmesan cheese, meats, peas, corn, mushrooms, and asparagus, among others. On the other hand, free glutamate may also be found in processed foods as a result of the use of MSG as a flavor enhancer (Yamaguchi & Ninomiya, 2000).
Since the 1950s, flavor enhancers such as MSG have been used in Brazil. ANVISA allows the use of MSG in several meat products, canned vegetables, food service preparations and fillings, among others. For milk products, the use is allowed only in grated cheese. For all products, the regulation specifies the use as quantum satis (sufficient to obtain the desired technological effect) (ANVISA, 2001, 2010, 2015). In comparison to NaCl, glutamate salts such as MSG or monoammonium L-glutamate, as well as disodium inosinate and guanylate, have low or no sodium content (Figure 1). There is an appropriate amount of MSG for NaCl replacement while maintaining acceptance of the food; an excess of MSG does not further promote the umami taste and, on the contrary, could lead to an undesirable sensation (Jinap & Hajeb, 2010). The recommendation for MSG use as a food additive is 0.1%-0.8% of the food weight, which corresponds to the amount of free L-glutamate naturally present in tomato or parmesan cheese (Beyreuther et al., 2007). For MSG, the sodium content is 12.28 g/100 g, approximately one-third of that of NaCl (39.34 g/100 g). In a homemade recipe of about 500 g of foodstuff (rice, minced meat, etc.), simply replacing 1/2 teaspoon of NaCl (2.5 g) with 1/2 teaspoon of MSG (2.0 g) reduces the sodium content by about 37% (Maluly, Pagani, & Capparelo, 2013).
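As a worked example of the homemade-recipe figure above (a sketch that assumes the recipe was originally seasoned with a full teaspoon, about 5 g, of NaCl, half of which is replaced by MSG; the text does not state this assumption explicitly):

```python
# Worked example of the ~37% sodium reduction quoted above. Assumption (not
# stated explicitly in the text): the recipe originally contains 1 teaspoon
# (~5 g) of NaCl, and half of it is replaced by 1/2 teaspoon (~2 g) of MSG.
NA_PER_G_NACL = 39.34 / 100  # g sodium per g NaCl
NA_PER_G_MSG = 12.28 / 100   # g sodium per g MSG (monohydrate)

sodium_before = 5.0 * NA_PER_G_NACL                      # ~1.97 g sodium
sodium_after = 2.5 * NA_PER_G_NACL + 2.0 * NA_PER_G_MSG  # ~1.23 g sodium
reduction = (sodium_before - sodium_after) / sodium_before

print(f"sodium before: {sodium_before:.2f} g, after: {sodium_after:.2f} g")
print(f"reduction: {reduction:.1%}")  # ~37.5%, i.e., the 'about 37%' quoted above
```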
The following sections present potential applications of MSG to reduce sodium content in specific foods.
| Soups
Yamaguchi and Takahashi (1984) were among the first researchers to test different concentrations of NaCl and MSG in soups with reduced sodium content. The authors carried out sensory panels with sumashi-jiru, a popular soup in Japan made with dried bonito fish. The scale used by the sensory panels ranged over seven points covering the amount of NaCl and palatability: from "extremely strong or palatable" (+3) to "extremely weak or unpalatable" (−3). Each panelist evaluated nine samples in random order, and the concentration of 0.81 g/100 g of NaCl and 0.38 g/100 g of MSG was considered the ideal formulation. The authors verified that the reduction in the amount of NaCl did not affect palatability. With these concentrations, it was possible to reduce sodium content and maintain acceptability. This research suggested that, to increase the palatability of reduced-sodium products, MSG content should be tested at fixed concentrations while varying the levels of NaCl until the most appropriate combination is found. This is the best strategy to reduce the total sodium content in soups without influencing their palatability.
A recent study conducted by Jinap et al. (2016) investigated the acceptance of a sodium reduction in spicy soups (curry chicken and chili chicken) by Malaysian panelists, replacing NaCl with MSG. The authors verified that MSG could maintain the acceptability of the soups. The highest acceptability scores were given to the soups with 0.8 g/100 g of NaCl and 0.7 g/100 g of MSG. These amounts corroborate the earlier study by Yamaguchi and Takahashi (1984), who noted that MSG could reduce the sodium content by 32.5%.
| Stocks and seasonings
Stocks and seasonings containing NaCl are generally the main vehicles that elevate sodium consumption, according to the POFs (2002-2003 and 2008-2009) (Sarno et al., 2009, 2013). To evaluate the acceptance of stocks and seasonings with low sodium content, Rodrigues, Junqueira, et al. (2014) performed a sensorial evaluation with garlic seasonings in three recipes of rice with 0%, 25% and 50% less NaCl. The seasonings were made with different proportions of NaCl, KCl (potassium chloride) and MSG. The authors verified that the panelists did not notice a strange or bad taste in the preparations, although it was reported that they contained less salt. In general, the preparations with less sodium were well accepted, and the authors concluded that, for these panelists, this choice could be a good alternative to reduce sodium content in homemade recipes.
FIGURE 1 Structures and sodium content of monosodium glutamate monohydrate, disodium inosinate and disodium guanylate
Since the first industrialized stock cube was created by Julius Maggi in 1863 in Switzerland, there have been significant modifications on the formulations and consumption profile of this product. The western population is the largest consumer of stock cubes, mainly due to the convenience and flavor enhancement of the meals. However, NaCl is used in large concentrations in these products, which led to a fourth monitoring agreement to reduce sodium content, which was signed between the ABIA and the Brazilian Ministry of Health in 2013 (IDEC 2014).
MSG is commonly added to stock cubes since it can intensify the overall flavor and increase the impact, continuity and complexity of the final preparation. For stocks, the technological recommendation for MSG is 15%-25%, and for seasonings, the recommended levels are 50%-70% (when up to 10% NaCl is used) or 8%-10% (when more than 10% NaCl is present in the product). These percentages can vary according to the ingredients added. Good Manufacturing Practices (GMP) are suggested in order to comply with the technological recommendations and maintain the sensorial characteristics of foods.
A Malaysian study conducted with commercial products identified high concentrations of free glutamate in the chicken stock powder (170.90 ± 6.40 mg/g). The authors also verified that the high concentrations were not from the addition of MSG alone but from the addition of yeast extract. These concentrations were considered safe, but slightly high for some technological protocols. Therefore, sensorial and composition analyses are suggested so that reduced sodium preparations do not exceed the technological limit of the umami taste sensitivity (Jinap & Hajeb, 2010;Khairunnisak, Azizah, Jinap, & NurulIzzah, 2009).
| Instant noodles
Noodles are produced from wheat flour, buckwheat flour, rice flour or maize starch, and these formulations can be modified according to the technology involved (Fu, 2008). Noodle dough needs NaCl, which has three different functions: strengthening the gluten and providing elasticity, improving flavor and texture, and inhibiting microorganism growth and enzyme activation. Generally, 1%-3% NaCl is added to noodle formulations and 8% to udon (a thick noodle made with wheat flour or buckwheat) (Fu, 2008). For rice flour noodles, which contain at least 7% protein, the main function of NaCl is to sustain the elasticity. Nevertheless, NaCl can reduce water penetration and increase the cooking time if the salt content is not controlled (Sangpring, Fukuoka, & Ratanasumawong, 2015).
In addition to the use of NaCl in noodle dough, salt is also added in the seasoning. The analyses of 22 samples of instant noodles available in the Brazilian market raised a great concern mainly because the results showed average levels of sodium content at 1,798 mg/100 g, with concentrations ranging between 1,435 mg/100 g and 2,160 mg/100 g when considering the overall product (dough + seasoning) and its reconstituted version (ANVISA 2012).
The umami taste has a large impact on this kind of foodstuff, since the seasonings are derived from meat and vegetable stocks, which are rich in glutamate, inosine and guanosine monophosphate. In the food industry, the use of umami substances in seasonings is extensive, and their recommended concentrations are 10-17 g/100 g for MSG or monoammonium glutamate (MAG), 0.5-0.7 g/100 g for disodium inosinate, and 0.3-0.7 g/100 g for the mixture of I + G (disodium inosinate + guanylate).
In addition, it is suggested here that the technological recommendations concerning the use of flavor enhancers not be exceeded.
Thereby, to reduce sodium in these products, the first approach is to create strategic government campaigns and warn the population about the reward of flavor variety in order to avoid food monotony, which will depend on consumption education and food behavior modifications (Henney et al., 2010).
| Meat products
In general, meat products have high sodium contents. NaCl is added for preservation purposes, since it can prevent the growth of some pathogenic bacteria, extend the shelf life, ensure safety and texture, and make the products (mainly lean meat) softer and better flavored. Table 1 shows some processed meat products consumed in Brazil and their sodium contents, according to the Brazilian Food Composition Table (TACO, 2011).
In an attempt to minimize the problems associated with high sodium consumption, diverse protocols have been developed without compromising the sensorial quality and safety of meat products. Ruusunen, Simolin, and Puolanne (2001), for example, evaluated formulations in which concentrations of 1.5 g/100 g and 0.75 g/100 g of NaCl, associated with 0.3 g/100 g of MSG, were used. The results showed that MSG did not interfere directly with the final scores, but it contributed to an increase in acceptance scores among products containing 0.75 g NaCl/100 g (a 50% reduction).
| Snacks
In addition to potato chips, different kinds of snacks have emerged in the market, such as tortilla chips in Mexico and the US, pretzels in Italy and Germany, and popcorn, hazelnuts, nuts and seeds, and meat products, such as jerky in the US. Among Brazilian commercial snacks, other than potatoes, tortillas and extruded snacks, there are some nuts, such as peanuts and Brazil nuts. Brazilian customers are beginning to look for healthy snacks, such as those made with vegetables and with reduced sodium and fat content (Barbosa, Madi, Toledo, & Rego, 2010). Consequently, the food industry has been trying to differentiate its products in this competitive market. Thus, flavor enhancers, such as MSG, have been applied in snacks to reduce sodium content. However, the total substitution of NaCl by flavor enhancers is not possible; rather, these substances are used to harmonize the salty taste during taste perception. MSG stimulates salivation and intensifies other flavors, such as the aromas of certain herbs used in these products (Ainsworth & Plunkett, 2007).
To reduce NaCl in snacks, the technological recommendation of MSG is up to 0.5 g/100 g. This is mainly due to the interactions with the salty taste and its property of covering-up the residual bitter taste promoted by NaCl substitutes, such as potassium chloride (Yamaguchi & Kimizuka, 1979).
Attempts have been made to use MSG in potato chips to reduce oil uptake and salt content. A solution containing NaCl (0.5 g/100 g) and MSG (0.03-0.3 g/100 g) was used in six different tests under vacuum conditions. A mass transfer phenomenon was produced by the impregnation of the solutes into the potato and the loss of water from the pores. The entrance of the two solutes into the pores, partially substituting the water lost in the process, reduced the evaporated water bubbles inside the tissue. The fried potatoes, with 19% less water, required less heat flow (160-165°C) to change water to the vapor state, whereas a larger amount of sensible heat was required to act on the dry matter, which favors the decrease in the oil flow into the pores. This could also lead to a decrease in acrylamide formation, which is desirable from a health and safety point of view, and to the formation of a crunchy layer, which is desirable from a sensorial point of view. Furthermore, NaCl and MSG were used in low concentrations when compared to the usual conditions, reducing the sodium content (Silvera, 2013).
Three formulations of brines used for the preparation of Mozzarella cheese were evaluated: A (no NaCl reduction - 300 g/L); B (25% reduced NaCl - 225 g/L + 64.6 g/L of KCl + 40.2 g/L of MSG); C (50% reduced NaCl - 150 g/L + 43 g/L of KCl + 160.8 g/L of MSG). Formulations B and C resulted in 30% and 54% sodium reductions, respectively. These reductions were obtained due to the diffusion coefficients of the salts in the brines. After Time-Intensity (TI) evaluation of the salty taste and Temporal Dominance of Sensations (TDS) tests, it was possible to verify that the modifications of the sodium content did not affect the palatability significantly; however, a lower salty taste sensation was reported. The reduced sodium formulations were widely accepted, and the authors discussed that the use of MSG and other flavor enhancers in formulations containing KCl is crucial for avoiding bitter or metallic aftertastes.
Cream cheese was investigated by Silva et al. (2013), who tested different types of NaCl substitutes, including KCl and magnesium salts.
TABLE 1 Processed meat products and sodium content
Beyreuther et al. (2007) reported that flavor enhancers impart the umami taste and boost other flavors. Thus, researchers need to pay attention to the self-limiting character of these food additives when adding them to foods.
| SAFETY ASPECTS OF MONOSODIUM GLUTAMATE
The available scientific data on the potential toxicity of MSG included studies on acute, subchronic, and chronic toxicity, as well as studies on teratogenicity and reproductive toxicity, in different animal species such as rats, mice, dogs, and rabbits. A detailed discussion of the results of those studies was reported by Reyes, Areas, and Maluly (2013).
| Metabolism and pharmacokinetics
After ingestion, glutamate is absorbed by the cells of the gastrointestinal tract and is catabolized in the cytosol and mitochondria by the transamination reaction under the action of various enzymes present in the stomach, intestines, and colon. One of the products of this catabolism, α-ketoglutarate, can enter the tricarboxylic acid cycle, releasing energy (ATP) and carbon dioxide. Other metabolic products include lactate, glutathione, glutamine, alanine, and various other amino acids (Burrin & Stoll, 2009).
Most of the glutamate present in foods (up to 95%) is metabolized by the first-pass effect and is used as an energy source by the enterocytes of the intestinal mucosa, whether it was added as a food additive or was naturally present in food (Reeds, Burrin, Stoll, & Jahoor, 2000). Therefore, even after the ingestion of large amounts of protein in the diet, the glutamate levels in plasma are low due to its rapid metabolism in the intestinal mucosa cells. The ingested glutamate that is not metabolized in the gastrointestinal tract enters the hepatic portal circulation and is metabolized in the liver, generating energy via the Krebs cycle or being converted into urea for excretion in urine (Burrin & Stoll, 2009).
The food components may also reduce the plasma concentration of glutamate when compared to the oral administration of the substance in water, especially if the food is rich in metabolizable carbohydrates. These carbohydrates provide pyruvate as a substrate for glutamate in the mucosal cells, so more alanine is formed and less glutamate reaches the portal circulation (Stegink, Filer, & Baker, 1987).
| Toxicity
In tests performed with rats and mice, very low acute toxicity was verified after the oral administration of glutamate, with an LD50 (the lethal dose that kills 50% of the animals studied) ranging from 10 to 22.8 g/kg bw (JECFA, 1988; Walker & Lupien, 2000).
JECFA (1988) evaluated subchronic and chronic toxicity studies conducted in rats and mice, including studies covering the reproductive phase. Data showed that long-term exposure to MSG, when administered in the diet at up to 4.0%, did not elicit any specific adverse effects in the evaluated animals. Similar results were observed in 2-year studies conducted with dogs that received 10% MSG in the diet.
Reproductive adverse effects and teratogenicity were not observed even when females were fed high doses of glutamate, indicating that the maternal diet does not increase the exposure of the fetus and neonate by transplacental transfer or by the nursing milk, respectively (JECFA, 1988).
| Estimated intake of glutamate from foods
The estimate of human exposure to glutamate from foods, for risk assessment purposes, will depend on the characteristics of the diet and should consider both the natural sources of the amino acid (glutamate bound to protein and free glutamate) and the use of its salts as flavor enhancers in processed foods.
Although information regarding the amount of glutamate ingested by humans is scarce in the literature, the average intake of the forms naturally present in foods (as part of the protein or in the free form) in a Western diet has been estimated at approximately 10 g/day, which is equivalent to 0.17 g/kg bw, assuming a 60 kg person.
For glutamate used as a food additive in the Western diet, the estimated intake ranged from 0.3 to 0.5 g/person/day (0.005-0.008 g/kg bw) for average consumers and reached 1 g/person/day (0.017 g/kg bw) for high consumers. In Asian countries, the glutamate intake from the addition of MSG or other salts is higher, ranging from 1.2 to 1.7 g/person/day (0.02-0.03 g/kg bw) for average consumers, and can reach 4 g/person/day (0.07 g/kg bw) for high consumers (Beyreuther et al., 2007).
Thus, the available data suggest that glutamate intake from the use of MSG as food additive is much lower when compared to the estimated values from glutamate naturally present in foods.
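The per-kilogram values quoted above follow directly from dividing daily intake by the 60 kg reference body weight used in the text; a quick check:

```python
# Converting the reported daily glutamate intakes to g/kg body weight, using
# the 60 kg reference person assumed in the text.
BODY_WEIGHT_KG = 60.0

intakes_g_per_day = {
    "naturally occurring glutamate, Western diet": 10.0,
    "added MSG, Western average consumers": 0.5,
    "added MSG, Western high consumers": 1.0,
    "added MSG, Asian average consumers": 1.7,
    "added MSG, Asian high consumers": 4.0,
}

for label, grams in intakes_g_per_day.items():
    print(f"{label}: {grams / BODY_WEIGHT_KG:.3f} g/kg bw per day")
# 10/60 = 0.17, 0.5/60 = 0.008, 1/60 = 0.017, 1.7/60 = 0.028 (~0.03), 4/60 = 0.067 (~0.07)
```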
| Monosodium glutamate consumption and the Chinese Restaurant Syndrome
The safety of MSG has been widely discussed since the Chinese Restaurant Syndrome (or the complex of symptoms related to MSG) was first described by Kwok (1968), who reported a set of signs and symptoms, including pain in the neck or head, weakness, and palpitations, after consuming Chinese food or, more precisely, food containing MSG. Some authors have also linked the intake of MSG to other symptoms such as asthma, atopic dermatitis, urticaria, respiratory difficulty, and tachycardia (Allen, Delohery, & Baker, 1987; Gann, 1977; Ratner, Eshel, & Shoshani, 1984; Van Bever, Docx, & Stevens, 1989). Nevertheless, studies performed to confirm the relationship between the intake of MSG and the development of the previously described symptoms, including double-blind, placebo-controlled investigations, show no plausible association and no evidence of a subpopulation sensitive to MSG, since individuals who identified themselves as sensitive to MSG failed to show reproducible symptoms (Geha et al., 2000; Yang, Drouin, Herbert, Mao, & Karsh, 1997).
These observations, together with the limitations that make it impossible to assume the existence of any positive association demonstrated in some studies, indicate that it is unlikely that the consumption of MSG is involved in the onset of the symptoms associated with the Chinese Restaurant Syndrome. In addition, scientific evidence does not suggest the participation of MSG in the onset of asthma, urticaria, angioedema or rhinitis (Williams & Woessner, 2009).
| Scientific Committee for Food (SCF)
An evaluation performed by the Scientific Committee for Food of the European Commission (SCF, 1991) resulted in findings similar to those reported by JECFA, that is, an ADI "not specified" can be attributed to MSG.
In 1995, the European Parliament established a use limit of 10 g of glutamate per kg of food.
| Food and Drug Administration (FDA) and the Federation of American Societies for Experimental Biology (FASEB)
In 1958, the United States FDA classified MSG as a substance "Generally Recognized as Safe" (GRAS). The agency maintained this classification when it reassessed the available data on the substance 20 years later. However, in the 1980s, due to concerns raised by some published studies showing associations between the consumption of MSG and the induction of adverse health effects, the FDA asked FASEB for a complete review of the scientific data regarding the safety of this food additive. FASEB, working through a group of independent scientists, completed its evaluation in July 1995 (FASEB, 1995). According to the FDA, the results were consistent with the assessments previously conducted by the other committees (JECFA and SCF), confirming both the safety of MSG when used as a food additive at the recommended levels (0.1%-0.8% in food) and the lack of evidence that chronic exposure to MSG causes health problems in the general population.
| Brazilian Health Surveillance Agency (ANVISA)
In Brazil, ANVISA takes into consideration the safety assessments conducted by JECFA and the US FDA. Thus, no restrictions were established for MSG as a food additive, which is used in accordance with Good Manufacturing Practices (GMP) in an amount sufficient to achieve the desired technological effect ("quantum satis") (ANVISA, 2010).
| FINAL COMMENTS
Considering the available data in the scientific literature, in addition to the information provided by the flavor enhancer industry, we could verify the use of umami substances as a strategy to reduce sodium in different foodstuffs (processed and homemade foods) without affecting the perception of saltiness and, therefore, contributing to the wellness and safety of the population. Many applications evaluated showed promising results, especially in those products with elevated sodium contents such as processed meat.
Despite concerns about the potential toxicity of MSG raised by some studies, scientific committees and regulatory agencies have concluded that this food additive is safe, based on toxicity assessments and randomized double-blind, placebo-controlled studies. An ADI "not specified" or GRAS status has been allocated to glutamate and its salts, meaning that they can be used as food additives in the amount necessary to achieve the desired technological effect. Nonetheless, in the European Union, a use limit of 10 g glutamate/kg of food has been established.
Other strategies, such as the use of nucleotides (IMP and GMP) and NFEs, could also be useful for enhancing products with reduced sodium content. The combination of different substances in a formulation could generate a larger impact in flavor continuity due to their synergistic effect when added at recommended concentrations to maintain the desirable flavor without exceeding the sensorial and technological limits.
Sensorial and physicochemical tests are always recommended to obtain higher-quality products while respecting consumer preferences and modern lifestyles.
ACKNOWLEDGMENT
The authors thank PROEX/CAPES (Process 3300301702P1) and the Institute for Glutamate Sciences in South America (IGSSA) for financial support.
CONFLICT OF INTEREST
The first author, Hellen D. B. Maluly, is a voluntary researcher and has been working as a Food Industry Consultant. She has interest in food additives that improve flavor in foods and ingredients that provide health benefits, both in the field of safety and technological application.
The other authors do not have conflicts of interest. | 2018-04-03T03:25:14.790Z | 2017-07-13T00:00:00.000 | {
"year": 2017,
"sha1": "a580aae240cb1b27208176ca89799c361e4b0972",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/fsn3.499",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d13173d3f730f6870858a0d1b0acd3aa2f19e387",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |